added (string) | created (string) | id (string) | metadata (dict) | source (string) | text (string) | version (string)
---|---|---|---|---|---|---
2023-12-29T05:06:31.518Z | 2023-12-27T00:00:00.000 | 266569895 |
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0289052&type=printable",
"pdf_hash": "317c5a50cc603eabdabce536131b927848e4a5a6",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46382",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "317c5a50cc603eabdabce536131b927848e4a5a6",
"year": 2023
}
| pes2o/s2orc |
Comparison of devices used to measure blood pressure, grip strength and lung function: A randomised cross-over study
Background Blood pressure, grip strength and lung function are frequently assessed in longitudinal population studies, but the measurement devices used differ between studies and within studies over time. We aimed to compare measurements ascertained from different commonly used devices. Methods We used a randomised cross-over study. Participants were 118 men and women aged 45–74 years whose blood pressure, grip strength and lung function were assessed using two sphygmomanometers (Omron 705-CP and Omron HEM-907), four handheld dynamometers (Jamar Hydraulic, Jamar Plus+ Digital, Nottingham Electronic and Smedley) and two spirometers (Micro Medical Plus turbine and ndd Easy on-PC ultrasonic flow-sensor), with multiple measurements taken on each device. Mean differences between pairs of devices were estimated along with limits of agreement from Bland-Altman plots. Sensitivity analyses were carried out using alternative exclusion criteria and summary measures, and using multilevel models to estimate mean differences. Results The mean difference between sphygmomanometers was 3.9mmHg for systolic blood pressure (95% Confidence Interval (CI): 2.5, 5.2) and 1.4mmHg for diastolic blood pressure (95% CI: 0.3, 2.4), with the Omron HEM-907 measuring higher. For maximum grip strength, the mean difference when either one of the electronic dynamometers was compared with either the hydraulic or spring-gauge device was 4-5kg, with the electronic devices measuring higher. The differences were small when comparing the two electronic devices (difference = 0.3kg, 95% CI: -0.9, 1.4), and when comparing the hydraulic and spring-gauge devices (difference = 0.2kg, 95% CI: -0.8, 1.3). In all cases limits of agreement were wide. The mean difference in FEV1 between spirometers was close to zero (95% CI: -0.03, 0.03) and limits of agreement were reasonably narrow, but a difference of 0.47l was observed for FVC (95% CI: 0.42, 0.53), with the ndd Easy on-PC measuring higher. Conclusion Our study highlights potentially important differences in measurement of key functions when different devices are used. These differences need to be considered when interpreting results from modelling intra-individual changes in function and when carrying out cross-study comparisons, and sensitivity analyses using correction factors may be helpful.
Introduction
Blood pressure, grip strength and lung function are commonly assessed in longitudinal population studies. All three are non-invasive measures of physiological function that are practical for a nurse or interviewer to administer in a home or clinical setting using portable equipment. They avoid the subjectivity of self-reports of health, enable researchers and clinicians to track changes in health and function over the life course [1] and are important biomarkers of healthy ageing [2]. Their repeat assessment within longitudinal studies, and inclusion in many studies, facilitates comparisons over time and across ages and cohorts [3,4].
Although there have been a number of initiatives to encourage standardisation of these measures [5][6][7], different devices have been adopted by different studies for a variety of practical reasons [8,9]. Furthermore, the device used within a long-running longitudinal study will often need to change over time as obsolete or outdated models are replaced with devices that are more technologically advanced and improve or extend measurement, are less costly, more portable or easier to use. Because devices of this kind are only subject to moderate regulation [10,11], the measures obtained from different makes and models of device are unlikely to be equivalent. This has important implications for research which either compares findings across studies or considers change in function longitudinally. For example, in a study modelling age-related changes in blood pressure across the life course which used data from eight British longitudinal studies, switching from a manual sphygmomanometer to an automated device, without correction for the difference in measurement, resulted in a steeper increase in mean trajectory of systolic blood pressure [4]. Similarly, artefactual findings attributable to a change in device have been observed in studies of lung function [12,13]. Indeed, concerns about potential differences in measures due to differences in spirometry devices have contributed to study investigators in the UK discouraging within- and cross-study analyses [14,15].
There are existing studies which have shown differences between devices used to measure blood pressure [16][17][18][19][20], grip strength [21][22][23][24] and lung function [6,12,13,25-27], but these have not yet compared all the devices commonly used in cohort and longitudinal population studies in the UK and many other countries. Further, these are only occasionally discussed in the context of both within- and between-study comparisons. To address this gap, a randomised cross-over trial was undertaken to compare measurements between devices used to assess blood pressure, grip strength and lung function commonly used in UK longitudinal population studies within the CLOSER consortium [28].
Study design and sample
For each of blood pressure, grip strength and lung function, a randomised cross-over study was carried out, so as to make within-person measurement comparisons. The study was conducted following established (CONSORT) guidelines [29]. The target sample, based on sample size calculations (S1 Appendix), was 120 men and women from the general population aged 45 to 74 years, comprising 20 men and 20 women from each of three age groups (45-54, 55-64, 65-74). Participants were drawn from a list of individuals who had participated in a market research study, consented to be re-contacted for research purposes, and were living in London and the South East of England. An invitation letter and information sheet were sent and this was followed up with a telephone recruitment process including assessment of health-related exclusion criteria (S1 Appendix). Eligible participants were then invited to attend a face-to-face assessment and each participant was measured on every machine (Table 1) at a single assessment visit.
All 90-minute face-to-face assessments took place in central London between October 2015 and January 2016 and were conducted by one of seven researchers who were trained and tested in all relevant protocols. All participants gave informed, written consent. The analytical dataset was pseudo-anonymised, with each participant given a study number so that individuals could not be identified. Ethical approval for data collection was given by University College London (UCL) (Ethics Project Number: 6338/001) and, for analysis, by the University of Southampton (Ethics Project Number: 18498). Participants received feedback on their results, advice to contact their General Practitioner if their blood pressure was elevated, and a gift voucher.
During the assessment, each participant was assessed in the sequence shown in Table 2. Blood pressure was measured consecutively on each device and the remaining measures were ordered to ensure that there was sufficient time between the four grip strength and two spirometry measurements to avoid participant fatigue. Multiple measurements were recorded on each device, as would be done in survey research. Height and weight were also measured and a short self-completion questionnaire was administered (S2 Appendix). For each of the three measures, the order of devices was determined before fieldwork began, using computer-generated random numbers within each age-sex stratum. Individuals were randomly allocated to one of two possible orders of blood pressure and lung function devices and to one of 24 possible orders of grip strength devices.
Blood pressure, grip strength and lung function measurement
Standardised measurement protocols were used as follows. For blood pressure, the participant was asked to sit on a chair with legs uncrossed and their right arm resting comfortably, palm up, on a table, with the sphygmomanometers positioned so that they could not see the display. The participant was asked to expose their right arm, making sure that rolled-up sleeves did not restrict circulation and that any watches or bracelets had been removed, and the sphygmomanometer cuff was then positioned over the brachial artery. After 3 minutes of quiet rest, 3 readings with a minute's rest between each reading were recorded using the first device. The device was then changed and, after a further 2 minutes rest, 3 readings were taken using the second device. There was no talking until three readings on both devices had been completed. Grip strength assessment was based on a published measurement protocol [30]. While seated in a chair with fixed arms, participants were asked to place their forearm on the arm of the chair in the mid-prone position (thumb facing up) with their wrist just over the end of the arm of the chair in a neutral but slightly extended position. Adjustments were made to each dynamometer to accommodate different hand sizes according to the make and model of the device. On hearing the words "And Go", the participant was encouraged, through strong verbal instruction, to squeeze as hard as possible for a few seconds until told to stop. For each device, two measurements were carried out in each hand in the sequence Left-Right-Left-Right. The value on the display was recorded to the nearest 0.1kg for the Jamar Plus+ and Nottingham Electronic, to the nearest 0.5kg for the Smedley and to the nearest 1kg for the Jamar Hydraulic.
Lung function measurements adhered to the American Thoracic Society/European Respiratory Society (ATS/ERS) lung function protocol [6]. The procedure was explained and demonstrated, and the participant then had a practice blow without completely emptying their lungs. All measurements were carried out with the participant standing unless they felt unable to do so. During measurement, maximum effort was encouraged verbally. In addition, the ndd Easy on-PC was linked to a laptop which showed a cartoon of a child blowing up a balloon. This represents a real-time trace and, as the participant is encouraged to exhale until the balloon pops, this helps ensure a maximal FVC is achieved. After each trial the researcher recorded whether it satisfied the protocol; for example, a trial was classified as not valid if the participant did not form a tight seal around the mouthpiece or coughed during the procedure, and in these instances feedback was provided before the next attempt. Participants had up to five attempts to produce three valid measurements of lung function from each spirometer.
Readings for blood pressure, grip strength and lung function using the Micro Medical spirometer were entered twice, independently, and compared to ensure accuracy. Lung function readings taken using the ndd Easy on-PC spirometer were downloaded directly from the laptop.
Other measures
Height was measured using a portable Marsden Leicester stadiometer and weight using Tanita 352 scales according to standardised procedures, from which body mass index was calculated as weight (kg)/height (m)². Responses to the self-completion questionnaire provided additional information on: age at completing full-time education, self-rated health, smoking history, medication use and musculoskeletal, cardiovascular and respiratory conditions which might influence performance on the functional tests (S2 Appendix).
Primary outcome measures
For the purposes of the main analyses, outcomes commonly used in epidemiological research were derived. The mean of the second and third readings of systolic and diastolic blood pressure in millimetres of mercury (mmHg) was used. For grip strength, the maximum of the four readings in kilograms (kg) was used. For lung function, the maximum forced expiratory volume in 1 second (FEV1) and forced vital capacity (FVC) in millilitres (ml) from the highest quality readings (quality A or B) were used. Quality grade A was when 3 or more acceptable tests were achieved with repeatability within 100ml, and B when 3 acceptable tests were achieved with repeatability within 150ml, as per ATS/ERS criteria [6].
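As an illustration of how these summary outcomes might be derived from the raw readings, a minimal Python/pandas sketch follows; the file and column names are hypothetical and not the study's actual data structure.

```python
import pandas as pd

# Hypothetical long-format reading tables; column names are assumptions for illustration.
bp = pd.read_csv("bp_readings.csv")      # participant, device, reading_no, sbp, dbp
grip = pd.read_csv("grip_readings.csv")  # participant, device, reading_no, grip_kg
lung = pd.read_csv("lung_readings.csv")  # participant, device, fev1_ml, fvc_ml, quality

# Blood pressure: mean of the second and third readings on each device.
bp_outcome = (bp[bp["reading_no"].isin([2, 3])]
              .groupby(["participant", "device"])[["sbp", "dbp"]].mean())

# Grip strength: maximum of the four readings on each device.
grip_outcome = grip.groupby(["participant", "device"])["grip_kg"].max()

# Lung function: maximum FEV1 and FVC, restricted to quality grade A or B readings.
lung_outcome = (lung[lung["quality"].isin(["A", "B"])]
                .groupby(["participant", "device"])[["fev1_ml", "fvc_ml"]].max())
```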
Statistical analyses
We described relevant characteristics by randomisation group for each measure. For each device we estimated the reliability using intraclass correlations (or Rho) and within-person standard deviations using a variance-components model [31]. To investigate order effects we used two-sample t-tests to compare the difference in mean values between groups with the measurements carried out in one sequence (device A followed by device B) compared with the opposite order (BA). For grip strength, where 4 devices were tested, 6 pairwise comparisons were made, ignoring the exact placement of devices within the sequence.
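The reliability statistic described here can be reproduced, under the stated variance-components formulation, with a random-intercept model. The sketch below is a Python/statsmodels approximation (the authors used STATA) with hypothetical file and column names.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data for one device: one row per reading.
readings = pd.read_csv("device_readings.csv")  # columns: participant, value

# Variance-components (random-intercept) model: value_ij = mu + u_i + e_ij
fit = smf.mixedlm("value ~ 1", data=readings, groups=readings["participant"]).fit()

between_var = float(fit.cov_re.iloc[0, 0])  # between-person variance, var(u_i)
within_var = fit.scale                      # within-person (residual) variance, var(e_ij)

icc = between_var / (between_var + within_var)  # intraclass correlation (Rho)
within_person_sd = within_var ** 0.5
print(f"ICC = {icc:.2f}, within-person SD = {within_person_sd:.2f}")
```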
We calculated the differences in measurement between pairs of devices, then assessed the mean within-person differences between pairs of devices using paired t-tests. The assumption that the mean differences were normally distributed was checked by plotting histograms, and Bland-Altman plots (the difference between measures versus the average of the measures from the two devices for each individual) were used to assess whether the variation was dependent on the magnitude of the measurements [32,33]. The mean difference in values between the two devices, and the 95% limits of agreement, which give the range in which we would expect 95% of future differences in measurements between the two devices to lie, were plotted [33,34].
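For concreteness, here is a minimal sketch of the paired t-test, mean difference and 95% limits of agreement described above, using simulated (purely illustrative) paired summary measures rather than the study data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Simulated paired summary measures (e.g., SBP in mmHg) for devices A and B.
device_a = rng.normal(130, 15, size=115)
device_b = device_a + rng.normal(4, 7, size=115)  # device B reads ~4 mmHg higher on average

diff = device_b - device_a
mean_diff = diff.mean()
sd_diff = diff.std(ddof=1)

# Paired t-test and 95% CI for the mean within-person difference.
t_stat, p_value = stats.ttest_rel(device_b, device_a)
ci_low, ci_high = stats.t.interval(0.95, df=len(diff) - 1,
                                   loc=mean_diff, scale=stats.sem(diff))

# 95% limits of agreement, as plotted on a Bland-Altman plot (difference vs pairwise mean).
loa = (mean_diff - 1.96 * sd_diff, mean_diff + 1.96 * sd_diff)
print(f"mean difference = {mean_diff:.1f} mmHg (95% CI {ci_low:.1f}, {ci_high:.1f}), "
      f"limits of agreement = {loa[0]:.1f} to {loa[1]:.1f}, p = {p_value:.3f}")
```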
We also performed a series of sensitivity analyses to test the robustness of the results. We repeated analyses having: (i) excluded measurements where the devices were administered in the incorrect order (n = 2 for blood pressure, n = 5 for grip strength and n = 1 for lung function); (ii) removed extreme outliers identified using scatter plots (n = 1 for blood pressure and n = 2 for grip strength); and (iii) used alternative outcome definitions commonly used in analyses. For blood pressure, we considered the mean of three readings [35] and the second reading only [36], and for grip strength, the mean of the four readings [37,38]. For lung function, we used the highest reading of FEV1 and FVC drawn from all available readings irrespective of whether they adhered to the ATS/ERS quality criteria.
Finally, we used multilevel modelling, as an alternative statistical approach, to estimate the differences between devices, using all available readings rather than a summary measure, in order to account for variance between readings. The models treat the repeated readings as Level 1 and the individual as Level 2 to account for non-independence of measurements from the same person. Model 1 included device treated as a fixed effect. Model 2 also included covariates to account for the order in which the devices were administered and the position of the reading in the sequence (1 to 3 for blood pressure, 1 or 2 for the dominant and non-dominant hands for grip strength, and 1 to 5 for lung function). Model 3 was additionally adjusted for age, sex and, for blood pressure only, body mass index.
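A sketch of how these multilevel models could be specified is shown below; it uses Python/statsmodels with hypothetical variable names and is only an approximation of the STATA models the authors fitted.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per reading (Level 1) nested in participants (Level 2).
df = pd.read_csv("all_readings.csv")  # value, device, order, position, age, sex, participant

# Model 1: device as a fixed effect with a random intercept per participant.
m1 = smf.mixedlm("value ~ C(device)", data=df, groups=df["participant"]).fit()

# Model 2: additionally adjust for device order and position of the reading in the sequence.
m2 = smf.mixedlm("value ~ C(device) + C(order) + C(position)",
                 data=df, groups=df["participant"]).fit()

# Model 3: additionally adjust for age and sex (and BMI in the blood pressure models).
m3 = smf.mixedlm("value ~ C(device) + C(order) + C(position) + age + C(sex)",
                 data=df, groups=df["participant"]).fit()

print(m3.summary())  # the C(device) coefficient is the estimated between-device difference
```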
Data cleaning and management were carried out using Excel, IBM-SPSS Version 22 and STATA 14.0 and analyses were conducted using STATA 15.0.
Results
During fieldwork, 118 assessments were completed, with 18-21 participants in each of the age-sex strata (S1 Table). Of the seven researchers, three carried out 20-30 assessments, two carried out 10-20 assessments and two carried out fewer than ten assessments.
The socio-demographic characteristics of the randomised groups were reasonably well balanced, as were key aspects of cardiovascular, musculoskeletal and respiratory health (Tables 3 and 4). The reliability of every device was good. The intraclass correlations were lowest for blood pressure (0.89-0.94), due to the acknowledged within-person variation in this measure (S2 Table). The values for grip strength of the dominant hand were above 0.95 for all devices except the Smedley dynamometer (0.92). Reliability was best for lung function (≥0.96), where within-person standard deviations were small. Reliability was slightly better when including only assessments adhering to the ATS/ERS quality criteria because two measures must be within 150ml of each other. There was no evidence of order effects for blood pressure or lung function. For grip strength, there was evidence of an order effect for the comparison between the Nottingham Electronic and Smedley dynamometers (difference = -3.08kg, 95% CI: -5.93, -0.23, p = 0.03) (S3 Table). Histograms show that for all three measures, the mean differences between devices were approximately normally distributed (S1 Fig).
Blood pressure
Three participants were excluded from analyses due to missing readings, leaving 115 for analysis. The mean difference in SBP between the two devices was 3.9mmHg (95% CI: 2.5, 5.2, p<0.001) and for DBP was 1.4mmHg (95% CI: 0.3, 2.4, p = 0.1), with the Omron HEM-907 measuring higher than the Omron 705-CP (Table 5). The Bland-Altman plots showed that as blood pressure increased, the difference between the two devices remained approximately constant (Figs 1 and 2). The limits of agreement were wide, being -10.6 to 18.3mmHg for SBP and -9.8 to 12.5mmHg for DBP.
Grip strength
All 118 participants were included in the analyses. There was no evidence of a difference in mean maximum grip strength when comparing the two electronic dynamometers, the Nottingham Electronic and Jamar Plus+ (difference = 0.3kg, 95% CI: -0.9, 1.4, p = 0.6), or when comparing the hydraulic and spring-gauge dynamometers, the Jamar Hydraulic and Smedley (difference = 0.2kg, 95% CI: -0.8, 1.3, p = 0.7). However, there were mean differences in maximum grip strength of between 4 and 5kg when comparing either of the electronic dynamometers with either the hydraulic or spring-gauge dynamometer (Table 5). The limits of agreement varied depending on the pair of devices being compared; for example, these were narrower (-2.0 and 10.1kg) when comparing the Jamar Plus+ and Jamar Hydraulic but very wide (-10.6 and 20.5kg) when comparing the Nottingham Electronic and Smedley dynamometers. Even in cases where the mean difference was near zero, the limits of agreement indicated substantial differences in measurement between devices. The Bland-Altman plots (Figs 3-8) showed that for the comparisons of the Smedley dynamometer with all other devices, the difference increased at higher magnitudes of mean grip strength (Figs 4, 6 and 8).
Lung function
Twelve participants had missing lung function measures and just under a third (n = 32 for FEV1 and n = 39 for FVC) of the remaining participants were excluded because there were no readings of a sufficiently high quality. There was no evidence of a difference in mean FEV1 between devices (difference = 0.00 litres, 95% CI: -0.03, 0.03, p = 0.9) but there was evidence of a difference in FVC (-0.47 litres, 95% CI: -0.53, -0.42, p<0.001), with the ndd Easy on-PC measuring higher than the Micro Medical (Table 5). The Bland-Altman plots suggested that for FEV1, the difference between the two devices was approximately constant as measurements increased and close to zero (Fig 9), with reasonably narrow limits of agreement (-0.25 and 0.25 litres). The plot for FVC suggested that the difference between devices remained constant as values of FVC increased (Fig 10) but the limits of agreement were wider (-0.92 and -0.03 litres).
Sensitivity analyses
When we repeated the analyses having excluded measurements where the devices were administered in the incorrect order (n = 8), removed outliers (n = 3), included the lung function readings that did not meet ATS/ERS criteria (n = 32 for FEV1 and n = 39 for FVC), and used alternative definitions of outcomes, there were only small changes in the estimated differences between devices such that the conclusions were unaltered (S4 Table). The only differences found were a small number of additional order effects (S5 Table), but these had no impact on the findings when order of device was controlled for through multilevel analysis. Indeed, when the data were reanalysed using multilevel models, the estimates of differences between devices showed only marginal changes, though the standard errors were reduced (S6-S8 Tables).
Discussion
In a randomised cross-over study of 118 adults aged 45-74 years, we found evidence of differences in measurement of blood pressure, grip strength and lung function when assessed using different devices. For blood pressure, the newer Omron HEM-907 measured higher than the older Omron 705-CP, with wide limits of agreement. For grip strength, the two electronic dynamometers recorded measurements on average 4-5kg higher than either the hydraulic or the spring-gauge dynamometer, but there were only small mean differences when comparing the two electronic dynamometers or the hydraulic and spring-gauge dynamometers. However, limits of agreement were wide for all comparisons. For lung function, the ndd Easy on-PC measures of FVC were an average of 0.47 litres higher than those from the Micro Medical, but there was no difference between measures of FEV1 and the limits of agreement were reasonably narrow. We are aware of only a few studies that have compared combinations of dynamometers previously. For example, King [21] compared the Jamar Hydraulic with the Jamar Plus+ dynamometer and, in contrast to our findings, reported that the electronic dynamometer had consistently lower readings than the hydraulic device and narrower limits of agreement. However, that study population was younger, with an average age of 32 years, comprising a convenience sample of 40 men and women, and may have had better function than our older sample, which could influence comparability across machines. Another study reported a difference of 3.2kg (limits of agreement -6.3 to 12.6) when comparing the Smedley dynamometer and the Jamar Hydraulic dynamometer, which contrasts with our finding of a smaller mean difference (0.2kg) but wider limits of agreement (-10.8 to 11.3) [22]. However, this other study was carried out in an older, smaller sample of 55 participants aged 65-99 years recruited from a retirement home and social day care centre. Another study [23] found that the Smedley dynamometer measured lower than the Jamar Plus+ Digital, similar to our study, although in this other study there were other potentially important variations in measurement protocol: measures using the Smedley device were undertaken in a standing position and those using the Jamar device were undertaken seated. Our findings provide some reassurance that there is a lack of bias in measurement between specific device combinations (i.e. the Jamar Plus+ and Nottingham Electronic; the Jamar Hydraulic and Smedley), although the limits of agreement suggest that the variation can still be substantial.
We have not identified a comparison of Micro Medical or other turbine spirometers with the ndd Easy on-PC spirometer. However, in a study of 35 volunteers, the Micro Medical turbine spirometer, used in our study, gave lower readings compared with the Vitalograph Micro pneumotachograph spirometer [13], for both FEV1 (mean difference of 0.24l) and FVC (0.34l). Another study of 49 volunteers found that the handheld ndd Easy on-PC spirometer produced systematically lower values than a pneumotachograph spirometer (Masterscreen) [25], for both FEV1 (mean difference of 0.24l) and for FVC (0.37l).
For lung function, the accuracy of measurement relies primarily on optimal coaching: a maximally deep breath, a rapid blast and appropriate encouragement, as well as a full seal around the mouthpiece and correct body posture [6]. The ndd Easy on-PC spirometer presents visualisation of the volume-time graph in real time, meaning that the participant can be encouraged to blow until the curve has reached a plateau, that is, when the true FVC has been achieved. In the absence of this visual display the forced manoeuvre may be terminated prematurely, and the FVC underestimated. We propose that this is the most likely explanation for the substantially higher FVC values obtained using the ndd Easy on-PC device than the Micro Medical device in our study, while there was no difference for FEV1. For FEV1, the mean difference between the two spirometers was zero and is, therefore, within the 150ml ATS/ERS criteria for replication of measurement. In addition, the limits of agreement did not exceed the 350ml criterion set in previous spirometry studies [27]. Whether using a group correction for FVC is valid, however, remains debatable as in the SAPALDIA study, a group correction from a quasi-experimental study was found not to be adequate, and an approach using spirometer-specific reference equations from longitudinal measurements to describe individualised correction terms was preferred [12].
In considering the potential clinical significance of the differences between devices, we have referred to published normative or predicted values of blood pressure, grip strength and lung function [3,39,40]. Based on analysis of age-related differences in mean blood pressure in the Health Survey for England 2016, the mean differences in SBP and DBP between devices that we observed are equivalent to an age difference of approximately five years, although the possible non-linearity of change with age in diastolic blood pressure across the age range of interest [41] makes that comparison more difficult. Further, the within-person standard deviation for systolic blood pressure is larger than the mean difference between devices. For grip strength [3], the observed 4-5kg difference is equivalent to an age difference of approximately 5 years among men and 10 years among women aged 65 years and above. For lung function, based on the National Health and Nutrition Examination Survey (NHANES) III data [42], predicted values for five-year age-groups (with male height of 175cm and female height of 160cm) show that a difference of 0.47l in FVC is equivalent to an age difference of around 15 years, between 45-75 years. Therefore, together with the wide limits of agreement and good measurement reliability for each device, the differences that we observed between devices are likely to have important practical implications for both grip strength and lung function. For example, the differences in dynamometers may result in discrepancies in clinical diagnoses which use cut-points when identifying an individual as sarcopenic [43]. Similarly, the difference in FVC, but not FEV1, between machines will have implications for defining participants with COPD based on the ratio FEV1/FVC. Maintaining consistency in the make and model of device used in studies reduces the likelihood of measurement differences, but is not always realistic given that equipment becomes obsolete and new technology can improve measurement, for example through automation (as is the case with the Omron HEM-907), the transition from analogue to digital (as is the case with the transition from the Jamar Hydraulic to Jamar Plus+ devices) or the introduction of visual encouragement and specific feedback (as provided by the ndd Easy on-PC). An important implication of our findings, therefore, is that it would be advisable for researchers to include simple experiments to assess machine comparability when a new device is introduced into a study. Conducting external comparison studies, such as ours, would also help interpretation for both within-study and between-study comparisons. In addition, the differences between devices need to be considered in the context of the reliability of measurements for each device being compared. Our analysis showed good reliability of measurements, particularly for the dynamometers and spirometers, suggesting the differences observed are important. The ATS/ERS quality control for lung function ensures excellent reliability, but does result in exclusion of those who cannot meet the criteria.
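To make the COPD point concrete, the following worked example uses purely illustrative spirometry values (assumed for this example, not drawn from the study data) to show how a device-related FVC shift of roughly 0.47 l can move an individual across the conventional FEV1/FVC < 0.70 threshold.

```python
# Illustrative values only; assumed for this example, not taken from the study.
fev1 = 2.80                          # litres; FEV1 showed no between-device difference
fvc_micro_medical = 3.60             # litres; hypothetical Micro Medical reading
fvc_ndd = fvc_micro_medical + 0.47   # ndd Easy on-PC measured ~0.47 l higher on average

ratio_micro = fev1 / fvc_micro_medical  # ~0.78: above 0.70, not classified as COPD
ratio_ndd = fev1 / fvc_ndd              # ~0.69: below 0.70, classified as COPD
print(round(ratio_micro, 2), round(ratio_ndd, 2))
```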
A key strength of this study design was that it used the same standardised measurement protocols for all devices. This is important because, for all three functional measures, the type of device is only one of several factors which can affect measurements, and these other factors need to be kept constant, as in our study. Blood pressure is affected by multiple factors [10] including the participant talking, actively listening, being exposed to cold, ingesting alcohol, having a distended bladder, recent smoking [44], and also by measurement protocols such as arm position and cuff size [45]. For grip strength, the values and precision of measurements have also been shown to be influenced by a range of factors [30,37] including whether allowance is made for hand size and hand-dominance [46], dynamometer handle shape [47], position of the elbow [48] and wrist during testing [49], setting of the dynamometer [50,51], effort and encouragement, frequency of testing, time of day and training of the assessor [30,51]. The study also included a relatively large sample size, based on an a priori sample size calculation, compared with other similar studies, and implemented a randomised design. While confidence in the results rests primarily on this randomised design [29], the fact that participants were drawn from a large database of members of the public, who had been involved in previous market research and consented to be re-contacted, suggests they may be more representative of the general population than the small-scale volunteer samples used in many previous studies. We also acknowledge the limitations of the study. The study findings cannot be generalised beyond the parameters of the research design; for example, results might differ for those outside the sampled age range (i.e., 45 to 74), and while the trial compared devices most commonly used in UK population-based studies, no comment can be made about device combinations which were not included [15]. While standardising the measurement protocols was an important aspect of the research design, it meant deviating from the protocol for the Smedley dynamometer (normally assessed standing rather than sitting) and so may limit the applicability of the findings for this device [30]. Furthermore, in the primary analyses of lung function, a number of participants were excluded due to missing or low-quality readings, particularly on the ndd Easy on-PC, thus reducing the sample size and power of these analyses. Nevertheless, sensitivity analyses using all available readings, irrespective of quality, suggested that this did not have a big impact on findings. Indeed, sensitivity analyses considering outliers, incorrectly ordered tests and alternative coding of measures all showed that our results were robust. Assessor may be a source of variation in our study which we have not accounted for, although this variation was minimised by consistent training and protocols, and it is not likely to have had a substantial impact on differences between devices since this was a within-person comparison and the same researcher assessed the same person on all machines.
In conclusion, this randomised cross-over study showed measurement differences between devices commonly used to assess blood pressure, grip strength and lung function which researchers should be aware of when carrying out comparative research between studies and within studies over time.
Table 4. Cardiovascular, musculoskeletal and respiratory health status of the study population by first device used (N = 118).
a Includes doctor diagnosed heart attack, angina and other heart condition. b Includes eczema, hay fever, asthma, COPD, bronchitis, emphysema and other respiratory problems. https://doi.org/10.1371/journal.pone.0289052.t004
Table 5. Differences in mean and limits of agreement for each pair of devices used to measure blood pressure, grip strength and lung function.
* p-value from paired t-test. https://doi.org/10.1371/journal.pone.0289052.t005
| v3-fos-license |
2018-11-11T01:39:44.598Z | 2018-10-22T00:00:00.000 | 53106365 |
{
"extfieldsofstudy": [
"Medicine",
"Business"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "http://www.thelancet.com/article/S2214109X18304376/pdf",
"pdf_hash": "3c1079eedc2aa575d78f9a16df801d97f5a2a264",
"pdf_src": "ScienceParsePlus",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46383",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "062308a02b277eb879669e55efd106a858dc4f0e",
"year": 2019
}
| pes2o/s2orc |
Introducing the World Bank’s 2018 Health Equity and Financial Protection Indicators database
Among the many shifts of emphasis that have been evident in global health over the past 25 years or so, two stand out: a concern over the poor lagging behind the better off in progress towards global goals; and a concern to look beyond whether people get the services they need to the affordability of the out-of-pocket expenditures associated with these services. These concerns over health equity and financial protection were absent from the Millennium Development Goals (MDGs), but are integral to the Sustainable Development Goals (SDGs).
The World Bank's 2018 Health Equity and Financial Protection Indicators (HEFPI) database 1 is a new global resource for tracking progress on both fronts. It is, in effect, the fourth in the series of such databases. The first two 2,3 (2000 and 2007) focused on maternal and child health and communicable diseases, and drew on data from Demographic and Health Surveys. The third 4 (2012) added data from Multiple Indicator Cluster Surveys and World Health Surveys, non-communicable disease (NCD) and financial protection indicators, and high-income countries. The 2018 database continues this broadening-out, including more health indicators, more countries, and more years of data. Disaggregated health data by wealth quintile are reported in all four datasets.
The 2018 database includes 18 indicators of service use (12 preventative, six curative) and 28 health outcome indicators. The financial protection indicators capture the proportions of the population incurring catastrophic expenses (those exceeding a specified proportion of a household's total consumption or income) or impoverishing expenses (expenses without which the household would have been above the poverty line, but because of the expenditures is below the poverty line). The health indicators include both MDG-era indicators and SDG-era (eg, NCD) indicators, and the financial protection indicators include those that reflect the SDG catastrophic expenditure threshold (10%) and the SDG international poverty line (US$1.90 per day).
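To make the two financial protection definitions concrete, here is a minimal sketch of how the catastrophic and impoverishing indicators could be computed from a household survey; the file and column names are hypothetical and this is not the World Bank's actual code.

```python
import pandas as pd

# Hypothetical survey: per-capita daily consumption and out-of-pocket (OOP) health spending,
# both expressed in the same dollars-per-day units, plus a survey weight.
hh = pd.read_csv("household_survey.csv")  # columns: consumption, oop_health, weight

CATASTROPHIC_THRESHOLD = 0.10  # SDG threshold: OOP exceeding 10% of total consumption
POVERTY_LINE = 1.90            # SDG international poverty line, $ per person per day

hh["catastrophic"] = hh["oop_health"] > CATASTROPHIC_THRESHOLD * hh["consumption"]
hh["impoverishing"] = (hh["consumption"] >= POVERTY_LINE) & \
                      (hh["consumption"] - hh["oop_health"] < POVERTY_LINE)

def weighted_share(flag: pd.Series, weight: pd.Series) -> float:
    """Survey-weighted proportion of households for which the flag is True."""
    return float((flag * weight).sum() / weight.sum())

print("catastrophic incidence:", weighted_share(hh["catastrophic"], hh["weight"]))
print("impoverishing incidence:", weighted_share(hh["impoverishing"], hh["weight"]))
```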
The data are calculated from household surveys, identified mostly through searches of data catalogues and websites of multicountry survey initiatives. None come from official reports by national governments, in part because such data do not lend themselves to disaggregation by household living standards, and in part because of concerns about accuracy, especially where governments do not face incentives to report accurate numbers. [5][6][7] Where we have been able to access the raw microdata, we have done so, mostly because indicator definitions can vary from one survey family to another, and sometimes even within a survey family. The estimates we report are simply direct (re)calculations of the quantities reported in the survey reports, harmonised as much as possible across surveys subject to the constraints imposed by the wordings of the original questions. In line with the growing concerns about the use of modelling in global health datasets, 8,9 we do not produce forecasts for country-years where there is no survey. Nor do we replace estimates directly calculated from the survey microdata by modelled estimates. The downside is that our dataset is full of gaps. The upside is that, insofar as the surveys we use are reliable, differences over time or across countries ought to reflect reality rather than modelling assumptions; conversely, when real changes occur on the ground, they ought to get reflected in our numbers, rather than being smoothed away by the modelling process. The health data were checked against the reports and websites of the original surveys where possible; differences are typically small and due to our harmonisation of definitions across surveys. The health data were also checked to make sure they lie in the required range. The financial protection estimates were subject to several internal and external checks, which led us to drop several household surveys, including several entire survey families.
The HEFPI database now covers 193 countries, up from 109 previously, and draws on over 1600 surveys, up from just 285. The table shows the variation across groups of indicators in terms of the number of datapoints and countries with data, for the population and quintile data. On average, across the 51 indicators, we have population data for just over 90 countries, with an average of 2·5 years of data per country. For the financial protection indicators, we have data for over 140 countries; for the SDG health service coverage indicators, we have fewer countries. We also have less disaggregated data, since we report disaggregated financial protection data only for high-income countries, and some NCD surveys do not collect the data necessary to disaggregate by household living standards.
The 2018 HEFPI dataset is freely downloadable, and a data visualisation tool is also available. To ensure the data are reproducible, and in line with the Guidelines for Accurate and Transparent Health Estimates Reporting, 10 we document our methods thoroughly in a working paper 1 and highlight the differences between our definitions and others'; we also provide the essential computer code used to produce the estimates.
| v3-fos-license |
2021-08-12T06:23:49.201Z | 2021-08-10T00:00:00.000 | 236977761 |
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1111/jcmm.16854",
"pdf_hash": "041742868e6b1698ae9141bec395b3e126261464",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46384",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"sha1": "1d7668fe4f8f2a8f4ed48264c57970bd504faaec",
"year": 2021
}
| pes2o/s2orc |
Activation of formyl peptide receptor 1 elicits therapeutic effects against collagen‐induced arthritis
Abstract Rheumatoid arthritis (RA) is an autoimmune disorder which shows production of autoantibodies, inflammation, bone erosion, swelling and pain in joints. In this study, we examined the effects of an immune‐modulating peptide, WKYMVm, that is an agonist for formyl peptide receptors (FPRs). Administration of WKYMVm into collagen‐induced arthritis (CIA) mice, an animal model for RA, attenuated paw thickness, clinical scores, production of type II collagen‐specific antibodies and inflammatory cytokines. WKYMVm treatment also decreased the numbers of TH1 and TH17 cells in the spleens of CIA mice. WKYMVm attenuated TH1 and TH17 differentiation in a dendritic cell (DC)‐dependent manner. WKYMVm‐induced beneficial effects against CIA and WKYMVm‐attenuated TH1 and TH17 differentiation were reversed by cyclosporin H but not by WRW4, indicating a crucial role of FPR1. We also found that WKYMVm augmented IL‐10 production from lipopolysaccharide‐stimulated DCs and WKYMVm failed to suppress TH1 and TH17 differentiation in the presence of anti‐IL‐10 antibody. The therapeutic administration of WKYMVm also elicited beneficial outcome against CIA. Collectively, we demonstrate that WKYMVm stimulation of FPR1 in DCs suppresses the generation of TH1 and TH17 cells via IL‐10 production, providing novel insight into the function of FPR1 in regulating CIA pathogenesis.
At the centre of RA pathogenesis, dendritic cells (DCs) play crucial roles in the activation of CD4 lymphocytes by presenting proper T-cell receptor stimulatory and co-stimulatory signalling cues, and context-dependent cytokines, polarizing them into several subsets of CD4 T cells including TH1 and TH17. 8 Meanwhile, tolerogenic DCs have immuno-suppressive properties and sustain peripheral tolerance by preventing excessive lymphocyte activation through anti-inflammatory surface molecules and cytokines such as TGFβ and IL-10. 9 Since the progression of RA is associated with dysregulated activation of TH1 and TH17 lymphocytes, it is crucial to identify molecular targets for switching the DC responses to tolerogenic states while suppressing excessive immune activation.
Formyl peptide receptors (FPRs), well-known chemoattractant receptors for leukocyte recruitment, are expressed in diverse immune cells 10,11 and can regulate immune cell activation and differentiation. 12 FPRs can recognize a diverse range of agonists that include formyl peptides derived from bacteria or mitochondria and host-derived agonists (serum amyloid A, LL-37) and regulate immune cell responses in a ligand-specific manner. 10,13,14 WKYMVm, a surrogate agonist for FPRs, 15,16 shows therapeutic effects against several infectious and inflammatory diseases such as polymicrobial sepsis, ulcerative colitis and respiratory disease, [17][18][19][20][21] implying important roles of FPRs in immune modulation. Previously, the function of FPRs was investigated in autoimmune arthritis, 22,23 and it has been shown that serum amyloid A, an endogenous FPR2 agonist, mediates synovial hyperplasia and angiogenesis via FPR2 of synovial fibroblasts during the progression of RA. 24,25 However, the function of FPRs in adaptive immunity remains unclear in autoimmune disease.
In this study, we investigated the roles of FPR in autoimmune disease with a well-known FPR agonist WKYMVm in a CIA mouse model by focusing on DC-mediated CD4 T-cell differentiation. 26 and each score from individuals were combined.
CIA mouse model
Vehicle (1× phosphate-buffered saline) or WKYMVm, synthesized by Anygen (Gwangju, Korea) with a purity >99.6%, was subcutaneously injected into CIA model mice daily following the secondary boosting. Cyclosporin H (CsH) (Enzo Life Sciences, Farmingdale, New York, USA) and WRW4 (Anygen, Gwangju, Korea) were subcutaneously injected 30 min before WKYMVm injection. For the therapeutic administration, WKYMVm was subcutaneously injected daily after onset of the clinical signs of CIA.
After monitoring CIA, mice were sacrificed at the CIA peak after secondary boosting for analysis.
Enzyme-linked immunosorbent assay (ELISA)
The levels of IgG1 and IgG2a reactive to immunized collagen in the peripheral blood serum were determined by using a mouse anti-bovine CII IgG1 and IgG2a antibody assay kit with tetramethylbenzidine (TMB) substrate (Chondrex, Redmond, WA, USA). Cytokine ELISA and antibody detection were carried out according to the manufacturer's instructions.
Histology of arthritic joints
One leg was randomly selected from each mouse and dissected for histology. The knee joints were collected and fixed in 4% paraformaldehyde. Samples were dehydrated in 50%, 70%, 90% and 100% ethanol and xylene and mounted using balsam. They were observed using a DM750 microscope (Leica, Wetzlar, Germany).
Intracellular cytokine staining and flow cytometry
For the detection of intracellular cytokines, cells were reactivated by PMA (50 ng/ml) and ionomycin (500 ng/ml) (Sigma-Aldrich) with protein transport inhibitor (Thermo Fisher Scientific, Waltham, MA, USA) for 5 h. Cells were blocked by anti-mouse CD16/32 antibodies before surface staining. Surface proteins expressed on cells were stained with fluorescence-conjugated antibodies diluted in FACS buffer (1× phosphate-buffered saline with 0.5% bovine serum albumin) for 30 min. Intracellular cytokine staining was performed using an intracellular fixation and permeabilization buffer set (Thermo Fisher Scientific, Waltham, MA, USA) following the manufacturer's recommendations.
Ex vivo collagen stimulation
Splenocytes were isolated from CIA mice at 35 days after immunization. Cells were stimulated by CII (50 μg/ml) with vehicle or 1 μM WKYMVm.
Generation of bone marrow-derived DCs (BMDCs) and maturation
Mouse bone marrow cells were isolated from 8- to 12-week-old mice.
Statistical analysis
All results are expressed as the mean ± SEM for the data obtained from the indicated number of experiments. Statistical analysis was performed using Student's t test or two-way ANOVA. A p value ≤ 0.05 was considered statistically significant.
An immune-modulating peptide, WKYMVm, elicits beneficial effects against CIA
First, we examined the effects of WKYMVm, a surrogate agonist for FPRs, on CIA according to a previous report. 27
FPR1 mediates WKYMVm-induced beneficial effects against CIA
Previous reports demonstrated that WKYMVm acts on three different FPR members (FPR1, FPR2 and FPR3) in human leukocytes and at least two FPR members (FPR1 and FPR2) in mouse leukocytes. 14,29 In this study, we examined which FPR subtype is involved in the beneficial effects of WKYMVm against CIA by using FPR1 or FPR2 antagonists, CsH or WRW4, respectively. Administration of CsH, an FPR1 antagonist, blocked WKYMVm-elicited beneficial effects against CIA, showing increased paw thickness compared to the WKYMVm-alone group (Figure 2A). However, WRW4, an FPR2 antagonist, did not affect WKYMVm-induced beneficial effects against CIA (Figure 2A). Histological analysis showed that WKYMVm-induced joint damage reduction was blocked by CsH but not by WRW4 (Figure 2B). Through H&E and safranin O staining, we also found that WKYMVm-induced cartilage restoration and inhibition of immune cell infiltration were blocked by CsH but not by WRW4 (Figure 2B). Taken together, our results suggest that WKYMVm-induced beneficial effects against CIA are mediated by FPR1 but not by FPR2.
WKYMVm inhibits IFNγ- or IL-17A-producing CD4 T cells via FPR1 in CIA mice
Previously, several cytokines were reported to mediate the pathogenesis of RA. 30 Among these, IFNγ and IL-17 are the major contributors to RA progression. 14,15,31 The major sources of these cytokines are effector CD4 T cells differentiated into TH1 and TH17. IFNγ and IL-17A are known to induce the expression of cell-to-cell interaction molecules and activate fibroblast-like synoviocytes to mediate inflammation in the synovium. 32 In this study, we observed that the establishment of CIA in mice could augment the levels of IFNγ+ CD4 T cells and IL-17A+ CD4 T cells (Figure 3A,B). We then examined the effects of WKYMVm on CII-specific T-cell responses. Splenocytes isolated from CIA mice were restimulated by CII and simultaneously treated with vehicle or WKYMVm during activation.
WKYMVm treatment significantly decreased IL-17 and IFNγ production (Figure 3C). Taken together, WKYMVm effectively blocked CII-specific TH1- and TH17-mediated immune reactions and this effect was mediated by FPR1.
WKYMVm-induced decrease of TH1 and TH17 cells is dependent on FPR1 expressed on DCs
Since WKYMVm administration elicited beneficial effects against CIA by downregulating TH1 cells in the spleen (Figures 1 and 3A), we next asked whether DCs mediate this effect. To test whether T-cell differentiation was also controlled by DCs presenting CII, BMDCs matured by LPS plus CII were co-cultured with CD4 T cells from CIA mice which were sensitized to CII. As in the previous results, the generation of TH1 and TH17 cells was also suppressed by WKYMVm in co-culture conditions, showing an FPR1 dependency (Figure 4D). In conclusion, WKYMVm suppresses TH1 and TH17 cell differentiation in the presence of DCs by acting on FPR1 expressed on the surface of DCs, affecting the interaction between the DCs and CD4 T cells.
WKYMVm further increases IL-10 production from LPS-stimulated DCs, which has a role in suppressing T-cell differentiation
DCs are matured in pathogenic conditions, and mature DCs produce several cytokines, present antigens and provide stimulatory signals to T cells. 8 Mature DCs express high levels of surface molecules such as major histocompatibility complex (MHC) and CD80/86 which have a role in DC-to-cell interaction with T cells. 8 Depending on the surrounding environment, DCs can be stimulatory or regulatory, which may stimulate or suppress T-cell activation, respectively, leading to polarization of T cells into various effector or regulatory subtypes. 9 Since we found that WKYMVm suppresses T H 1 and T H 17 cell generation in the presence of DCs, we examined whether WKYMVm can affect regulatory cytokine production in CIA mice.
We found that administration of WKYMVm significantly increased IL-10-producing DCs (Figure 5A). We then tested the effects of WKYMVm on the production of IL-10 from mature DCs. When matured by LPS, DCs produced several cytokines such as IL-10, IL-6, IL-12 and IL-1β. Addition of WKYMVm significantly increased the production of IL-10 from mature DCs, which was inhibited by CsH but not by WRW4 (Figure 5B), suggesting a crucial role of FPR1.
However, WKYMVm did not affect the levels of IL-6, TNFα, IL-12 and IL-1β (data not shown). Furthermore, in the ex vivo stimulation with CII, WKYMVm significantly increased IL-10 production from the splenocytes of CIA mice (Figure 5C). Thus, we next focused on the expected function of IL-10 from DCs in regulating T-cell differentiation in vitro.
Previously, it was reported that IL-10 produced from DCs can suppress T-cell expansion. 9,33,34 Here, we aimed to see whether IL-10 mediates the WKYMVm-induced suppression of TH1 and TH17 differentiation.
WKYMVm shows therapeutic effects in experimental CIA
We also examined whether WKYMVm shows therapeutic effects against CIA. For this, we administered vehicle or WKYMVm daily after onset of the clinical signs of CIA. As shown in Figure 6A, therapeutic administration of WKYMVm also elicited beneficial effects against CIA.
DISCUSSION
In this study, we found that the immune-modulating peptide WKYMVm elicits beneficial effects against CIA by acting on FPR1 expressed on DCs. DCs can be classified into two distinct subsets, the stimulatory and regulatory (or tolerogenic) types. 9 Regulatory DCs produce IL-10 and show suppressive responses against active immune responses. 9 In this study, we attempted to test the effects of WKYMVm on the stimulatory and tolerogenic phenotype of DCs.
Although WKYMVm failed to suppress the expression of proteins that provide stimulatory signals, such as the CD40 ligand (CD40L), MHC II and CD80/86 (data not shown), and did not affect other cytokines, WKYMVm augmented the production of IL-10 in response to LPS from DCs (Figure 5B). Since IL-10 is the representative cytokine produced by regulatory DCs, 9 and IL-10 produced from DCs can suppress T-cell expansion, 33,34 our results suggest that WKYMVm stimulates the generation of regulatory DCs. In a previous report, we demonstrated that WKYMVm inhibits human monocyte-derived DC maturation caused by LPS, showing a decrease of IL-12, a decrease of CD86/HLA-DR and a decrease of allostimulatory activity. 35 Collectively, from our previous report and current findings, we suggest that WKYMVm may have more complex effects on DC maturation and differentiation in human monocyte-derived DCs and mouse BMDCs. Another previous report demonstrated that FPR signalling initiated by Cpd43, a dual agonist for FPR1 and FPR2, makes CD4 T cells more apoptotic and inhibits the proliferation of fibroblast-like synoviocytes, thereby attenuating the CIA mouse RA model via FPR2. 36 The functional role of FPR2 in the regulation of RA pathogenesis was also demonstrated by showing that deletion of annexin A1, an endogenous FPR2 agonist, exacerbates arthritis severity in K/BxN serum-injected mice. 22 However, the functional role of FPR1 in RA pathogenesis and its mode of action remained to be resolved.
Since we found that WKYMVm showed beneficial effects against CIA (Figure 1), and WKYMVm is a surrogate agonist for mouse FPR family members such as FPR1 and FPR2, 14 WKYMVm is a useful agent for the control of RA. We also suggest FPR1 as a major target to control DCs for the treatment of autoimmune diseases. The data that support the findings of this study are available from the corresponding author upon reasonable request.
| v3-fos-license |
2017-07-16T07:42:51.125Z | 2012-05-01T00:00:00.000 | 46564724 |
{
"extfieldsofstudy": [
"Medicine",
"Biology"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.scielo.br/j/mioc/a/JK73Fqq5Xh4WdCcCtpWqgmL/?format=pdf&lang=en",
"pdf_hash": "4322806d1771eab135dcbeb31afbb7a9b3604e62",
"pdf_src": "Anansi",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46386",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "4322806d1771eab135dcbeb31afbb7a9b3604e62",
"year": 2012
}
| pes2o/s2orc |
Changing the epidemiology of carbapenem-resistant Pseudomonas aeruginosa in a Brazilian teaching hospital: the replacement of São Paulo metallo-β-lactamase-producing isolates
In Brazil, carbapenem-resistant Pseudomonas aeruginosa isolates are closely related to the São Paulo metallo-β-lactamase (SPM) Brazilian clone. In this study, imipenem-resistant isolates were divided into two sets, 2002/2003 and 2008/2009, analysed by pulsed-field gel electrophoresis and tested for the Ambler class B metallo-β-lactamase (MBL) genes bla SPM-1, bla IMP and bla VIM. The results show a prevalence of one clone related to the SPM Brazilian clone in 2002/2003. In 2008/2009, P. aeruginosa isolates were mostly MBL-negative, genetically diverse and unrelated to those that had been detected earlier. These findings suggest that the resistance to carbapenems by these recent P. aeruginosa isolates was not due to the spread of MBL-positive SPM-related clones, as often observed in Brazilian hospitals.
One isolate was used per patient, for a total of 73 isolates. The bacterial species were identified by standard biochemical tests. Susceptibility testing was performed by means of the disk-diffusion method with the following antimicrobial agents: imipenem, ceftazidime, ciprofloxacin, amikacin, gentamicin, piperacillin/tazobactam, aztreonam and polymyxin B, in compliance with the Clinical and Laboratory Standards Institute guidelines (CLSI 2010). The minimal inhibitory concentration (MIC) of carbapenems for MBL-negative isolates was determined by the automated BD Phoenix system and interpreted in accordance with CLSI. Selected isolates were screened for MBL production by the disk approximation test as previously described (Arakawa et al. 2000).
Presumptive MBL producers were further tested for the bla SPM-1 , bla IMP and bla VIM genes. Bacterial DNA was extracted using the Brazol kit (LGC Biotecnologia, Brazil) following the recommendations of the manufacturer and analysed by polymerase chain reaction (PCR) using the primer pairs bla SPM-1 (forward: 5'-CCTACAATCTAACGGCGACC-3', reverse: 5'-TCGCCGTGTCCAGGTATAAC-3'), bla IMP (forward: 5'-GGAATAGAGTGGCTTAATTCTC-3', reverse: 5'-GTGATGCGTCYCCAAYTTCACT-3') and bla VIM (forward: 5'-TGCGCATTCGACCGACAATC-3', reverse: 5'-GTCGAATGCGCAGCACCAGG-3') (Migliavacca et al. 2002, Gales et al. 2003, Toleman et al. 2005). Positive controls for the P. aeruginosa bla SPM-1 , bla IMP and bla VIM genes were kindly provided by the Special Clinical Microbiology Laboratory and the ALERTA Laboratory (São Paulo, Brazil). The amplicons were purified with a PCR purification kit (Promega Co, USA) and submitted to DNA sequencing on the platform of the Aggeu Magalhães Research Centre, Oswaldo Cruz Foundation, Recife. The nucleotide sequences were evaluated with the BioEdit program and analysed by online BLASTn against the GenBank dataset (National Center for Biotechnology Information). In addition, all of the isolates were genotyped by DNA macrorestriction followed by pulsed-field gel electrophoresis (PFGE) using the endonuclease SpeI. A representative isolate of the SPM Brazilian clone (from São Paulo Hospital) and the sequenced strain PA01 (kindly provided by the Pseudomonas Genome Project, Boston, MA, USA) were included as reference points.
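For readers who wish to re-use the primer information above, the short Python sketch below tabulates GC content and a rough Wallace-rule melting-temperature estimate for each listed primer. The Wallace rule and the treatment of the degenerate base Y are simplifying assumptions added here for illustration; they are not part of the original protocol.

```python
# Illustrative helper (not from the original study): quick GC-content and
# Wallace-rule Tm estimates for the MBL-gene primers listed above.
# Wallace rule: Tm ~ 2*(A+T) + 4*(G+C); degenerate bases counted at 3 C.

PRIMERS = {
    "blaSPM-1_F": "CCTACAATCTAACGGCGACC",
    "blaSPM-1_R": "TCGCCGTGTCCAGGTATAAC",
    "blaIMP_F":   "GGAATAGAGTGGCTTAATTCTC",
    "blaIMP_R":   "GTGATGCGTCYCCAAYTTCACT",   # Y = C/T degenerate base
    "blaVIM_F":   "TGCGCATTCGACCGACAATC",
    "blaVIM_R":   "GTCGAATGCGCAGCACCAGG",
}

def gc_fraction(seq: str) -> float:
    """GC fraction; degenerate bases (e.g. Y) are counted as 0.5 G/C."""
    strong = sum(seq.count(b) for b in "GCS")
    weak = sum(seq.count(b) for b in "ATW")
    other = len(seq) - strong - weak
    return (strong + 0.5 * other) / len(seq)

def tm_wallace(seq: str) -> float:
    """Approximate Wallace-rule melting temperature in degrees C."""
    strong = sum(seq.count(b) for b in "GCS")
    weak = sum(seq.count(b) for b in "ATW")
    other = len(seq) - strong - weak
    return 4 * strong + 2 * weak + 3 * other

if __name__ == "__main__":
    for name, seq in PRIMERS.items():
        print(f"{name:12s} len={len(seq):2d}  GC={gc_fraction(seq):.2f}  Tm~{tm_wallace(seq):.0f} C")
```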
Clonal relationships among the isolates were established using the criteria of Tenover et al. (1995).
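The Tenover scheme interprets clonal relatedness from the number of PFGE band differences between isolates. A minimal sketch of that interpretation is given below, assuming the commonly quoted thresholds (0 differences, indistinguishable; up to 3, closely related; 4-6, possibly related; 7 or more, unrelated); it is illustrative only and is not the analysis pipeline used in this study.

```python
# Schematic interpretation of PFGE band differences following the commonly
# cited Tenover et al. (1995) guidelines; illustrative only.

def tenover_category(band_differences: int) -> str:
    """Map the number of PFGE band differences to a relatedness category."""
    if band_differences < 0:
        raise ValueError("band_differences must be non-negative")
    if band_differences == 0:
        return "indistinguishable (same strain)"
    if band_differences <= 3:
        return "closely related (likely same clonal pattern)"
    if band_differences <= 6:
        return "possibly related"
    return "unrelated (distinct PFGE type)"

# Example: hypothetical comparisons against a reference clone profile.
for n in (0, 2, 5, 9):
    print(n, "->", tenover_category(n))
```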
The isolates from the present work revealed an important change in the epidemiology of carbapenem-resistant P. aeruginosa between the periods 2002-2003 and 2008-2009, showing a decreasing prevalence of the epidemic SPM-1-producing clone in the final period of bacterial recovery (Table). Moreover, the antimicrobial susceptibility tests revealed that these bacteria were co-resistant to many other anti-Pseudomonas drugs, particularly the most recent strains (2008/2009). On the other hand, all of the bacterial samples were susceptible to polymyxin B (Table).
A high prevalence of the MBL phenotype was observed among the 2002/2003 isolates (98.4%) (Table). This coincided with a higher incidence of bla SPM-1 than those found in previous studies conducted on a national scale (Sader et al. 2004, Gräf et al. 2008). Nevertheless, this high incidence of the bla SPM-1 gene decreased among the 2008/2009 isolates (Table), suggesting that carbapenem-resistance mechanisms other than MBL must be present and are being spread in the hospital under study. Moreover, these other mechanisms, such as efflux pump over-expression (associated or not with porin down-regulation), may also be involved in the overall increased resistance to anti-Pseudomonas drugs observed in 2008/2009. None of the isolates showed amplification of the bla IMP or bla VIM gene (data not shown). Thus, two presumptive MBL producers from 2002/2003 did not carry any of the MBL genes tested. As expected, none of the 10 MBL-negative isolates indicated the presence of MBL genes.
Molecular typing indicated the prevalence of bacterial isolates (herein designated as genotype A) closely related to the SPM Brazilian clone (Gales et al. 2003), which was widely disseminated in the 2002/2003 group of isolates (Table). The existence of common PFGE types among carbapenem-resistant P. aeruginosa isolates from distinct geographical locations has been reported and indicates clonal dispersion (Zavascki et al. 2005, Fonseca et al. 2010). In Brazil, there have been previous reports of the spread of a unique SPM-type MBL-positive clone (Gales et al. 2003). As expected, the MBL-negative isolates from 2008/2009 were unrelated to the epidemic Brazilian clone and showed six distinct PFGE types (Table). Thus, new clones could be responsible for the dissemination of other resistance mechanisms to carbapenems. The three 2008/2009 MBL-positive isolates that carried the bla SPM-1 gene belonged to clonal pattern A. Interestingly, the increase in bacterial variation was also accompanied by an increase in bacterial resistance. It is noteworthy that the MIC values for imipenem and meropenem were not high among the more recent isolates (MIC > 8 μg/mL), which corroborates the hypothesis that there are alternative resistance mechanisms. This is supported by the fact that MBL enzymes increase antimicrobial MICs more effectively than does either efflux pump over-expression or porin down-regulation alone (Xavier et al. 2010).
In conclusion, the population of carbapenem-resistant P. aeruginosa in the hospital under study was replaced by MBL-negative, genetically unrelated bacterial isolates. This finding emphasises the need for continuous surveillance strategies and an improvement of the infection control measures in this institution.
TABLE Microbiological (remainder of caption and table content not recovered)
|
v3-fos-license
|
2018-04-03T04:32:57.453Z
|
2014-03-18T00:00:00.000
|
39430614
|
{
"extfieldsofstudy": [
"Biology",
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "HYBRID",
"oa_url": "http://www.jbc.org/content/289/20/14239.full.pdf",
"pdf_hash": "ad51dc3b1d805585acdf49f4387ba369ed85706c",
"pdf_src": "Highwire",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46389",
"s2fieldsofstudy": [
"Biology"
],
"sha1": "658c17cf39ef1427dc0d965b92c69d96518e3ffa",
"year": 2014
}
|
pes2o/s2orc
|
Musashi Protein-directed Translational Activation of Target mRNAs Is Mediated by the Poly(A) Polymerase, Germ Line Development Defective-2*
Background: Although Musashi mediates target mRNA polyadenylation, the underlying molecular mechanism has not been elucidated. Results: Germ line development defective-2, a poly(A) polymerase, associates with Musashi and is necessary and sufficient for Musashi-directed polyadenylation. Conclusion: Germ line development defective-2 mediates Musashi-dependent mRNA translation. Significance: Germ line development defective-2 couples Musashi to polyadenylation and translational activation of target mRNAs. The mRNA-binding protein, Musashi, has been shown to regulate translation of select mRNAs and to control cellular identity in both stem cells and cancer cells. Within the mammalian cells, Musashi has traditionally been characterized as a repressor of translation. However, we have demonstrated that Musashi is an activator of translation in progesterone-stimulated oocytes of the frog Xenopus laevis, and recent evidence has revealed Musashi's capability to function as an activator of translation in mammalian systems. The molecular mechanism by which Musashi directs activation of target mRNAs has not been elucidated. Here, we report a specific association of Musashi with the noncanonical poly(A) polymerase germ line development defective-2 (GLD2) and map the association domain to 31 amino acids within the C-terminal domain of Musashi. We show that loss of GLD2 interaction through deletion of the binding domain or treatment with antisense oligonucleotides compromises Musashi function. Additionally, we demonstrate that overexpression of both Musashi and GLD2 significantly enhances Musashi function. Finally, we report a similar co-association also occurs between murine Musashi and GLD2 orthologs, suggesting that coupling of Musashi to the polyadenylation apparatus is a conserved mechanism to promote target mRNA translation.
Growing evidence has revealed the cell's ability to control translation of specific subpopulations of mRNAs both spatially and temporally. A common theme of targeted translational control is regulation of 3′ poly(A) tail length, wherein a short poly(A) tail results in translational inactivation and a long poly(A) tail promotes translational activation (1-3). This form of translation control has been primarily studied in oocytes of the frog Xenopus laevis. Immature (Dumont stage VI) oocytes are arrested in late G2 and, in response to progesterone stimulation, resume the cell cycle and proceed into meiosis. Once stimulated, the oocyte nucleus (germinal vesicle) breaks down (GVBD), 2 which is marked by the appearance of a white spot on the cell's animal pole, and the oocyte continues to metaphase of meiosis II, at which point it is mature and competent to be fertilized (4).
The oocyte maturation process (resumption of the cell cycle and progression through meiosis) involves a highly regulated signaling cascade, which is dependent upon synthesis of new proteins such as the Mos proto-oncogene, B-type cyclins, and Musashi (5)(6)(7)(8)(9)(10)(11). Because transcription is suppressed during maturation, the oocyte controls synthesis of critical proteins through selective translation of maternally derived mRNAs, in a strict temporal order (9,(11)(12)(13)(14)(15)(16)(17)(18). This dependence on mRNA translation and lack of interference from transcriptional pathways make oocyte maturation an excellent model system to study the mechanisms of translational control (19).
Central to control of the signaling cascade that leads to GVBD are the translational regulators Pumilio, Musashi, and the cytoplasmic polyadenylation element-binding protein 1 (CPEB1). Activating these factors occurs through a sequential hierarchy and results in the translational activation of target mRNAs controlled by each respective factor in a temporally orchestrated manner (20). Pumilio mediates repression of Ringo until progesterone stimulation leads to dissociation of Pumilio from the Ringo mRNA, resulting in Ringo translation (21,22). Ringo then activates free cyclin-dependent kinase subunits, triggering phosphorylation of several targets, including Musashi (23). Musashi phosphorylation triggers the cytoplasmic polyadenylation and translation of several target mRNAs, including those encoding Mos, cyclin B5, and Nrp1A/B (Musashi-1) prior to GVBD (10,(23)(24)(25). Subsequently, Mosdependent activation of MAPK signaling primes CPEB1 for activation (26), leading to de-repression and translational activation of CPE-dependent mRNAs, including those encoding cyclin B1, Wee1, and CPEB4 (27)(28)(29)(30). At GVBD, activation of the maturation-promoting factor (cyclin B/cyclin-dependent kinase) results in partial degradation of CPEB1 and functional substitution of CPEB1 with CPEB4 (30). CPEB1 activation appears to be mediated by several mechanisms depending upon the repression complex assembled on the mRNA (31)(32)(33). In immature stage VI oocytes, the atypical poly(A) polymerase GLD2 is found in a complex with CPEB1 and the deadenylase PARN. Progesterone-stimulated maturation triggers phosphorylation of CPEB1 and expulsion of PARN, resulting in the "unopposed" polyadenylation of CPE-containing mRNAs by GLD2 (34,35).
Although a fairly detailed understanding of Pumilio and CPEB1 function has emerged, the mechanism(s) by which Musashi directs translational activation of target mRNAs in response to progesterone is unknown. Musashi does not appear to mediate repression of Musashi-binding element (MBE)-containing mRNAs in immature oocytes but does mediate both cytoplasmic polyadenylation and translational activation in response to progesterone stimulation (24,25,36). In this study, we identify GLD2 as a Musashi-interacting factor that contributes to cytoplasmic polyadenylation of MBE target mRNAs prior to GVBD. GLD2 associates with Musashi-1 and Musashi-2 in both immature and progesterone-stimulated oocytes. The interaction of GLD2 with Musashi appears to be functional as assessed by several criteria. First, deletion of the region containing the GLD2 interaction domain compromises the ability of Musashi to mediate cell cycle progression. Second, inhibition of GLD2 synthesis with antisense oligonucleotides ablates cell cycle progression and attenuates Musashi-directed mRNA cytoplasmic polyadenylation. Finally, overexpression of GLD2 and Musashi exerts a synergistic acceleration of Mos mRNA polyadenylation and oocyte maturation. We also demonstrate interaction between the mammalian orthologs of GLD2 and Musashi-1, suggesting that GLD2 interaction with Musashi is a conserved mechanism to direct cytoplasmic polyadenylation and activation of Musashi target mRNA translation.
EXPERIMENTAL PROCEDURES
Oocyte Culture and Microinjections-Dumont Stage VI immature Xenopus laevis oocytes were isolated and cultured as described previously (37). Oocytes were micro-injected using a Nanoject II Auto-Nanoliter Injector (Drummond Scientific). mRNA for oocyte injection was made by linearization of the plasmid and in vitro transcription using SP6 (Promega) or T7 (Invitrogen) RNA polymerase as appropriate. Oocytes were stimulated to mature with 2 μg/ml progesterone. The appearance of a white spot on the animal pole was used to score the rate of GVBD. Where indicated, progesterone-stimulated oocytes were segregated when 50% of the oocytes completed GVBD (GVBD 50 ) into those that had not (−) or had (+) completed GVBD. In the event of ambiguous morphology, oocytes were fixed for 10 min in ice cold 10% trichloroacetic acid and dissected for the presence or absence of a germinal vesicle. Animal protocols were approved by the UAMS Institutional Animal Care and Use Committee, in accordance with Federal regulations.
Oocyte Lysis and Sample Preparation-Oocytes were lysed in 10 μl/oocyte of ice cold Nonidet P-40 lysis buffer (1% Nonidet P-40, 20 mM Tris, pH 8.0, 137 mM NaCl, 10% glycerol, 2 mM EDTA, 50 mM NaF, 10 mM sodium pyrophosphate, 1 mM PMSF, 1× protease inhibitor (Thermo Scientific)). Yolk and cell debris were removed by centrifugation at 12,000 rpm for 10 min in a refrigerated tabletop centrifuge. Where required, a portion of the lysate was transferred immediately following lysis to STAT-60 (Tel-Test, Inc) for RNA extraction using the manufacturer's protocol, followed by a subsequent purification by precipitation in 4 M LiCl at −80°C for 30 min and centrifugation at 12,000 rpm for 10 min in a refrigerated tabletop centrifuge.
Pulldown and RNase Treatment-Oocytes were injected with 57.5 ng of each in vitro transcribed mRNA and incubated for 16 h at 18°C. Lysates were prepared as described above. 300 μl of oocyte lysate was added to 450 μl of ice cold Nonidet P-40 lysis buffer and incubated with 50 μl of 50% glutathione-Sepharose conjugated bead slurry (GE) at 4°C for 6 h with gentle rotation. Beads were then gently pelleted by centrifugation at 500 rpm for 5 min; the supernatant was removed and replaced with 500 μl of fresh Nonidet P-40 lysis buffer. This process was repeated 3 times. On the third wash, 200 U of RNase1 (Ambion) was added and incubated for 15 min at 37°C. Following the final centrifugation, all Nonidet P-40 was removed and 50 μl of LDS sample loading buffer (Invitrogen) was added. Beads were incubated for 10 min at 70°C, then pelleted by centrifugation at 12,000 rpm for 10 min. Finally, 45 μl of the sample was loaded per lane of a 10% NuPAGE gel (Invitrogen) and electrophoresed.
Western Blotting-For each lane, half-oocyte equivalents of lysate were prepared in NuPAGE LDS sample loading buffer and electrophoresed through a 10% NuPAGE gel, then transferred to a 0.2 μm-pore-size nitrocellulose membrane (Protran; Midwest Scientific). The membrane was blocked with 5% nonfat dried milk in TBST (20 mM Tris, pH 7.5, 150 mM NaCl, 0.05% Tween 20) for 60 min at room temperature, or overnight at 4°C. Following incubation with primary antibody, filters were washed three times for 10 min in TBST, incubated with horseradish peroxidase-conjugated secondary antibody, then washed 3 × 10 min in TBST. Blots were developed using enhanced chemiluminescence in a Fluorchem 8000 Advanced Imager (Alpha Innotech Corp.). Western blots were quantified using Fluorchem FC2 software (Alpha Innotech Corp.).
Antibodies-The HA antibody (Cell Signaling) was used at 1:1000. The GST (Santa Cruz Biotechnology) and GFP (Invitrogen) antibodies were used at 1:5000. The Tubulin antibody (Sigma) was used at 1:10,000. The Xenopus GLD2 antibody was a generous gift from Dr. Marvin Wickens and used at 1:1000. All working antibody preparations were made in TBST + 0.5% nonfat milk.
TABLE 1 Plasmid construction
For PCR fragment generation, the template was subjected to PCR amplification using the indicated primers. Resulting PCR fragments were then purified using agarose gel electrophoresis followed by cleanup using a QIAquick gel extraction kit (Qiagen). Next, the fragments and destination vector were digested using the indicated restriction enzymes and again purified and cleaned up using gel electrophoresis and the QIAquick kit. The fragment and vector were then ligated using the T7 Quick Ligase (New England Biolabs). Finally, the ligated fragment/vector was used to transform competent DH5-α Escherichia coli. For PCR-directed mutagenic deletion, the template was subjected to PCR amplification of the entire plasmid. The primer sequences "looped out" the desired sequence for deletion. PCR-directed stop codon insertion is the same as the PCR-directed mutagenic deletion, except primers directed insertion of a stop codon rather than deletion.
Polyadenylation Assays-cDNAs for polyadenylation assays were synthesized using RNA ligation-coupled PCR as described previously (38). The increase in PCR product length is specifically due to extension of the poly(A) tail (36,38). The same reverse primer was used for all reactions and has the sequence: 5′-GCTTCAGATCAAGGTGACCTTTTT-3′. The Mos forward primer has the sequence: 5′-GCAAGGATATGAAAAAAAGATTTC-3′. The Cyclin B1 primer has the sequence: 5′-GTGGCATTCCAATTGTGTATTGTT-3′.
Antisense Oligodeoxynucleotide Injections and Rescue-Antisense oligodeoxynucleotide 5′-TCCCTCGTCGCTTCTCCTCTTTCTGT-3′ was designed to target the endogenous XGLD2-a and XGLD2-b mRNAs. Antisense oligodeoxynucleotides targeting Xenopus Musashi-1 and Musashi-2 were previously described (24). Control oocytes were injected with a randomized oligonucleotide with the sequence 5′-TAGAGAAGATAATCGTCATCTTA-3′ (12). A total of 100 ng of antisense oligonucleotides was injected for each condition and oocytes were incubated at 18°C for 16 h. For Musashi rescue assays, antisense-injected oocytes were subsequently injected with 23 ng of RNA encoding wild-type Musashi-1 or a deletion mutant Musashi-1, as indicated. The oocytes were then incubated for 1 h at room temperature to allow expression of the protein, then stimulated to mature with progesterone.
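As an illustrative aid (not part of the original methods), the following sketch derives the sense-strand sequence targeted by the GLD2 antisense oligodeoxynucleotide quoted above; the reverse complement of an antisense oligo corresponds to the mRNA region to which it anneals.

```python
# Illustrative check: the sense-strand sequence targeted by an antisense
# oligodeoxynucleotide is its reverse complement (U replaces T in the mRNA).

COMPLEMENT = str.maketrans("ACGT", "TGCA")

def reverse_complement(seq: str) -> str:
    """Return the reverse complement of a DNA sequence."""
    return seq.translate(COMPLEMENT)[::-1]

GLD2_ANTISENSE = "TCCCTCGTCGCTTCTCCTCTTTCTGT"   # sequence quoted in the text
target_dna = reverse_complement(GLD2_ANTISENSE)
target_mrna = target_dna.replace("T", "U")
print("targeted sense strand (DNA): ", target_dna)
print("targeted sense strand (mRNA):", target_mrna)
```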
Statistical Analysis-All quantitated data are presented as the mean ± S.E. Statistical significance was assessed by one-way analysis of variance followed by the Bonferroni post hoc test, or by Student's t test when only two groups were compared. A probability of p < 0.05 was adopted for statistical significance.
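A minimal sketch of this statistical workflow is shown below using the SciPy library and placeholder replicate values; the group names and numbers are hypothetical, and the original analysis was not performed with this code.

```python
# Minimal sketch of the described statistics with placeholder data;
# one-way ANOVA, then pairwise t tests with a Bonferroni-adjusted alpha.
from itertools import combinations
from scipy import stats

groups = {                                  # hypothetical replicate values
    "control":        [1.00, 1.05, 0.95],
    "musashi":        [1.80, 1.95, 1.70],
    "musashi_gld2":   [3.10, 3.40, 2.95],
}

# One-way ANOVA across all groups.
f_stat, p_anova = stats.f_oneway(*groups.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_anova:.4f}")

# Pairwise Student's t tests with a Bonferroni correction.
pairs = list(combinations(groups, 2))
alpha = 0.05 / len(pairs)                   # Bonferroni-adjusted threshold
for a, b in pairs:
    t_stat, p = stats.ttest_ind(groups[a], groups[b])
    print(f"{a} vs {b}: t = {t_stat:.2f}, p = {p:.4f}, significant = {p < alpha}")
```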
RESULTS
Musashi Specifically Associates with the Poly(A) Polymerase, GLD2-Musashi has been previously demonstrated to regulate the polyadenylation status of target mRNAs (10,24,25). However, Musashi itself is not a poly(A) polymerase. We therefore hypothesized that Musashi must recruit a poly(A) polymerase to the 3Ј UTR of mRNAs containing a MBE. The noncanonical poly(A) polymerase GLD2, which does not bind target mRNAs directly (42), has been previously suggested to mediate CPEdirected cytoplasmic polyadenylation (34). Further, GLD2 has been reported to associate with the Mos mRNA (39), which we have previously shown to be under Musashi-directed control (25). We therefore hypothesized that GLD2 may mediate Musashi-directed polyadenylation. To test this idea, we used a co-association assay to determine whether Musashi-1 and GLD2 associate in oocytes. Briefly, immature Stage VI oocytes were co-injected with mRNA encoding GST-Musashi-1 and HA-GLD2. Oocytes were then allowed to express the proteins, lysed, subjected to GST pulldown in the presence of RNase1 (to eliminate proteins that simply co-associate by virtue of interacting with the same mRNA) and specific interacting proteins detected by Western blotting. A GLD2-specific band was identified as a co-associating protein in GST-Musashi-1-injected oocytes, but not in oocytes injected with the GST moiety alone (Fig. 1A, arrowhead). As a positive control, GLD2 association was similarly detected with GST-CPEB1 (Fig. 1A). We also injected the mRNA encoding GST-Musashi-1 or GST, and endogenous GLD2 interaction was detected using a GLD2 specific antibody (Fig. 1B, arrowhead). No GLD2 association was detected with the GST moiety alone.
To determine whether GLD2 and Musashi-1 remain associated during maturation, when Musashi is known to actively direct polyadenylation of target mRNAs, we co-injected oocytes with mRNA encoding GST-Musashi-1 and HA-GLD2 as before. Following incubation, the oocytes were either left untreated or stimulated with progesterone before lysis. When 50% of the oocyte population reached GVBD (GVBD 50 ), they were sorted into those oocytes that had completed GVBD and those which had not. This sorting of oocytes was critical because Musashi has been shown to direct progesterone-stimulated polyadenylation prior to GVBD (24,25). Following GST pulldown and visualization by Western blotting, GLD2 was found to associate with GST-Musashi-1 in progesterone-stimulated oocytes prior to GVBD (Fig. 1C). When normalized for the amount of GST-Musashi-1 recovered in each experiment, no significant difference in GLD2 association with Musashi was observed between immature and progesterone-stimulated oocytes (Fig. 1C, Graph). No GLD2 association was detected with the GST moiety alone. We conclude that GLD2 specifically interacts with Musashi-1 in both immature and progesterone-stimulated oocytes.
FIGURE 1. Musashi-1 associates specifically with the noncanonical poly(A) polymerase, GLD2. A, oocytes were co-injected with mRNA encoding HA-GLD2 and either GST-XMsi1, GST-CPEB1, or GST. The injected oocytes were incubated overnight to express the introduced proteins and then lysed. Lysates were then subjected to GST pulldown and treatment with RNase I. Associations were visualized by Western blotting. GST-XMsi1 and GST-CPEB1 associate with HA-GLD2 in an RNase I-independent manner, although the GST tag alone does not (arrowhead). UI, uninjected oocytes. B, oocytes were injected with GST-XMsi1 or GST and allowed to express the protein before lysis and pulldown. An antibody targeting endogenous GLD2 detects GLD2 associating with GST-XMsi1 but not GST (arrowhead). UI, uninjected oocytes. C, oocytes were co-injected with mRNA encoding HA-GLD2 and either GST-XMsi1 or GST. Following incubation, oocytes were stimulated to mature with progesterone. When 50% of oocytes reached GVBD, lysate was made using immature (I) and progesterone-stimulated oocytes pre-GVBD (−) and post-GVBD (+). HA-GLD2 associates with GST-XMsi1 in immature and progesterone-stimulated oocytes (arrowhead). UI, uninjected oocytes. A representative experiment is shown, and the composite results of three independent experiments are shown graphically (right panel). D, oocytes were injected with mRNA encoding GFP-XMsi1 and GST-XPARN or GST. Oocytes were then treated as in C. XMsi1 associates with PARN in immature and progesterone-stimulated oocytes (arrowhead). A representative experiment is shown, and the composite results of three independent experiments are shown graphically (right panel). E, oocytes were injected with mRNA encoding GFP-XCPEB1 and GST-XPARN or GST. Oocytes were then treated as in C and D. As described previously, cytoplasmic polyadenylation element-binding protein dissociates from PARN after progesterone addition. A representative experiment is shown.
Previous analyses of GLD2 interaction with CPEB1 have suggested that GLD2 function is opposed in immature oocytes by the presence of the deadenylase PARN within the CPEB1 complex. Progesterone stimulation resulted in phosphorylation of CPEB1 and expulsion of PARN, leading to the unopposed action of GLD2 on CPE-containing mRNAs (35). We therefore sought to determine whether PARN associated with Musashi-1 in immature oocytes and was similarly expelled in response to progesterone. We found that PARN can associate in an RNaseinsensitive manner with Musashi-1 in immature oocytes and we do not observe dissociation of PARN from Musashi-1 complexes in response to progesterone stimulation (Fig. 1D). By contrast, significant dissociation of PARN from CPEB1 was observed after progesterone stimulation (Fig. 1E), consistent with an earlier study (35). These results indicate that PARN interaction with CPEB1 and Musashi-1 is differentially regulated in progesterone-stimulated oocytes.
Deletion Mapping of the GLD2 Binding Domain within the Musashi Protein-As an initial step toward testing the functional significance of the Musashi/GLD2 association, we sought to map domain(s) of Musashi-1 required for association with Xenopus GLD2 protein. A series of deletion mutants of the Musashi-1 ( Fig. 2A) and CPEB-1 (Fig. 2B) proteins were generated and tested for GLD2 co-association in the GST-pulldown assay (Fig. 2C). The GLD2 minimal interaction domain appears to reside within amino acids 190 -220, which lie C-terminal to the two RNA recognition motif domains. This 31-amino acid domain was sufficient for GLD2 to interact with Musashi-1 when fused to GST (Fig. 2, A and C, XMsi1 190 -220). The Xenopus Musashi-2 protein was also able to co-associate with GLD2, as were the mammalian Musashi-1 and Musashi-2 proteins (Fig. 2, A and C). We conclude that GLD2 interacts within the C-terminal region of Musashi and that this interaction appears to be evolutionarily conserved for both the Musashi-1 and Musashi-2 isoforms.
Because the GLD2 interaction domain on CPEB1 has not been previously determined, deletions of the CPEB1 protein were tested for association with GLD2. We found that the GLD2 protein associates with the N-terminal half of the CPEB1 protein (Fig. 2, B and C). However, no obvious linear consensus sequence was found between this portion of CPEB1 and the 190 -220 domain of Musashi-1.
Interaction of GLD2 with Musashi Promotes Target mRNA Polyadenylation and Cell Cycle Progression-To test the functional significance of the Musashi:GLD2 interaction, we compared the ability of wild-type and GLD2 interaction-defective mutant Musashi proteins to mediate cell cycle progression in response to progesterone stimulation. In this assay, endogenous Musashi function is specifically abrogated by injection of antisense oligonucleotides targeting both endogenous Musashi-1 and Musashi-2 mRNAs but not by scrambled, control oligonucleotides (24). As expected, the Musashi antisense oligonucleotides inhibited progesterone-stimulated GVBD (Fig. 3, A and B, No Rescue). Re-injection of RNA encoding GST fused to wild-type Xenopus Musashi-1 (Fig. 3A) or murine Musashi-1 (Fig. 3B) rescued progesterone-stimulated oocyte maturation as expected (24). By contrast, injection of similar levels of deletion mutants lacking the GLD2 interaction domain in either the Xenopus Musashi-1 (Fig. 3A, XMsi1 Δ) or murine Musashi-1 (Fig. 3B, mMsi1 Δ) demonstrated a significantly reduced ability to mediate timely progression through oocyte maturation (assessed as % GVBD at the time the wild-type Musashi-1 rescue achieved GVBD 50 ). Taken together, the data indicate that deletion of the GLD2 interaction domain attenuated Musashi-directed cell cycle progression.
To complement the Musashi deletion analyses, we sought to inhibit endogenous GLD2 function through the use of antisense oligonucleotide injection. In contrast to control oligonucleotides, injection of GLD2 antisense oligonucleotides targeting both the GLD2a and GLD2b isoforms significantly attenuated progesterone-stimulated oocyte maturation (Fig. 4A). The Musashi-1/2 antisense oligonucleotides were used as a positive control for inhibition of GVBD. These findings complement an earlier report in which depletion of GLD2 using neutralizing antisera also blocked cytoplasmic polyadenylation (34). As expected, the GLD2 antisense oligonucleotides, but not control scrambled oligonucleotides, led to the selective cleavage of the endogenous GLD2 mRNA (Fig. 4B). Next, the polyadenylation status of the Musashi target mRNA, Mos, was examined. Treatment with Musashi-1/2 antisense caused a deadenylation of 20 nucleotides in immature oocytes, which was restored (but not subsequently extended) after progesterone addition (Fig. 4, C and D). GLD2 antisense-treated oocytes did not show any deadenylation but did show significantly attenuated polyadenylation (by ~30 adenylate residues) compared with control oocytes after progesterone treatment (Fig. 4, C and D). Indeed, even in the few oocytes that were able to complete GVBD, GLD2 antisense-treated oocytes showed a dramatic attenuation of the poly(A) tail extension. We conclude that inhibition of endogenous GLD2 synthesis dramatically attenuates oocyte maturation and Musashi-dependent cytoplasmic polyadenylation of target mRNAs.
Further support for a functional interaction between Musashi-1 and GLD2 was obtained from overexpression studies. Injection of RNA encoding GLD2 or GST-Musashi-1 alone did not affect the rate of progesterone-stimulated oocyte maturation (Fig. 5A). However, co-injection of GLD2 with GST-Musashi-1 resulted in a 1.83 Ϯ 0.25-fold increase in the rate of maturation (Fig. 5A). Protein expression from the injected RNAs was verified by Western blotting (Fig. 5B). When the polyadenylation status of the endogenous Mos mRNA was assessed, co-expression of GLD2 and GST-Musashi-1 resulted in an acceleration of Mos mRNA polyadenylation by ϳ1 h (Fig. 5, C and E), compared with water-injected controls. Importantly, the polyadenylation of the late class cyclin B1 mRNA still occurred after completion of GVBD in oocytes co-expressing Musashi-1 and GLD2, so the relative temporal order of maternal mRNA translational activation was maintained. Additionally, the absolute length of the Mos mRNA poly(A) tail was not significantly altered, rather the time to initiation of polyadenylation was advanced. The acceleration in onset of polyadenylation of the Mos mRNA was not seen when Musashi-1 or GLD2 were overexpressed separately (Fig. 5D). We conclude that co-expression of GLD2 and Musashi-1 accelerates the time of initiation of polyadenylation of Musashi target mRNAs.
Mammalian Orthologs of Musashi and GLD2 Co-associate-We finally asked if the murine Musashi and murine GLD2 proteins interact. We co-injected oocytes with GFP-tagged mouse GLD2 and GST-tagged mouse Musashi-1 or Musashi-2 or the GST moiety alone. Although GLD2 association with Musashi-1 (Fig. 6A) was detected, we did not see significant association between Musashi-2 and GLD2 (Fig. 6A). This finding indicates that mammalian Musashi-1 and GLD2 are capable of associating and that coupling of Musashi-1 with polyadenylation machinery may be an evolutionarily conserved mechanism of targeted polyadenylation and translational activation.
DISCUSSION
In this study, we provide the first evidence that Musashi can interact with the GLD2 poly(A) polymerase. The GLD2 enzyme is an atypical poly(A) polymerase in that it lacks inherent RNA binding function and is recruited to target mRNAs by association with sequence-specific RNA-binding proteins (34,39,42,43). Interaction of GLD2 with Musashi serves to selectively recruit GLD2 to a subset of cellular mRNAs that possess Musashi-binding elements. GLD2 interaction with Musashi is RNase-insensitive, indicating that the interaction does not occur by simple co-occupancy of GLD2 and Musashi proteins on the same mRNA. Rather, the interaction occurs via proteinprotein association in an mRNA-independent manner. As such, our findings link Musashi directly with the 3Ј end processing machinery that controls modification of the poly(A) tail in the cytoplasm to stimulate translational activation of select mRNA species in response to extracellular stimulation.
Our findings complement prior reports linking GLD2 to CPE-dependent mRNA polyadenylation via recruitment through interaction with CPEB1 (34,35). Both GLD2 and PARN are present in CPEB1 complexes in immature oocytes. In this context, PARN is thought to exert a dominant effect and thereby override GLD2 activity to maintain short poly(A) tails and translational dormancy. Upon progesterone stimulation, CPEB1 undergoes phosphorylation of Ser-174, which triggers expulsion of PARN from the complex and leads to the unopposed action of GLD2 and consequent extension of poly(A) tails of CPE-regulated mRNAs (23). We see a similar co-association of Musashi with both GLD2 and PARN in immature oocytes. However, upon progesterone stimulation and phosphorylation of Musashi on Ser-297 and Ser-322, which are critical for translational activation of Musashi target mRNAs (23), we do not observe expulsion of PARN from the Musashi complex (Fig. 1D). We do observe PARN dissociation from CPEB1 (Fig. 1E) implying that the mechanism of MBE-dependent mRNA polyadenylation and translational activation differs from that proposed for CPEB1 and does not appear to require PARN expulsion. At this juncture, it is not clear how the complex is remodeled to allow Musashi-directed mRNA cytoplasmic polyadenylation in response to progesterone stimulation. Presumably, PARN function is in some way attenuated within the Musashi complex and/or GLD2 activity is promoted, resulting in a net gain in poly(A) tail length. Interestingly, the continued presence of PARN in Musashi complexes in progesteronestimulated oocytes may explain the deadenylation of certain Musashi target mRNAs that lack CPE sequences after GVBD (25,36). Future studies will be required to directly assess the regulation of GLD2 and PARN within Musashi complexes assembled upon MBE-controlled mRNAs.
Our data suggest a functional requirement for Musashi interaction with GLD2. Expression of wild-type Musashi-1 can rescue cell cycle progression in Musashi antisense-injected oocytes (24), whereas expression of Musashi mutant proteins encoding a deletion spanning the GLD2 interaction domain were significantly attenuated in their functional capability to rescue cell cycle progression. Furthermore, injection of antisense oligonucleotides targeting GLD2 specifically attenuated both oocyte maturation and polyadenylation of early class mRNAs in response to progesterone.
Although a role for GLD2 in Musashi function is clearly indicated, Musashi activity is not eliminated in the GLD2 interaction mutant. The residual Musashi-1 function may be explained by several nonexclusive possibilities, including a second weak GLD2 interaction domain elsewhere on Musashi-1 that is not detected in our pulldown experiments, recruitment of an alternative poly(A) polymerase during maturation, or involvement of a potential polyadenylation-independent mechanism of Musashi-mediated translational activation (25).
Although our results are consistent with a requirement for Musashi to interact with GLD2 to mediate maternal mRNA translational activation, we cannot exclude the possibility that additional proteins may also interact with the same domain on the Musashi protein and thereby contribute to translational activation. For example, we note that a 44-amino acid domain of the mammalian Musashi-1 protein, including the GLD2 interaction region identified here (Fig. 2), has been reported to be necessary for repression of target mRNAs and association with the poly(A)binding protein in rabbit reticulocyte lysate and HEK293T cells (44). However, the embryonic poly(A)-binding protein is the predominant poly(A)-binding protein in oocytes (45)(46)(47), and the function of the embryonic poly(A)-binding protein, if any, in Musashi-directed translational control is unclear. At this juncture, our data do not distinguish between indirect or direct interaction between Musashi and GLD2. Nonetheless, co-injection of both Musashi-1 and GLD2 acts synergistically to promote target mRNA translation and acceleration of progesterone-stimulated maturation, suggesting that GLD2 is a critical component of the Musashi translation promoting complex.
Deletion mapping localized the interaction domain to a 31-amino acid region within the C-terminal half of the protein (amino acids 190-220). This region lies outside of the two N-terminal RNA recognition motifs. Expression of the minimal region identified by deletion mapping fused to GST was sufficient to mediate interaction with GLD2 when expressed in oocytes (Fig. 2). The GLD2 interaction was also detected with the related Xenopus Musashi-2 isoform. In addition, both mammalian Musashi-1 and Musashi-2 demonstrated interaction with Xenopus GLD2 when ectopically expressed in the Xenopus oocyte. The 31-amino acid region is >73% identical between the Xenopus Musashi-1 and Musashi-2 isoforms. CPEB1 interaction with GLD2 occurs via the N-terminal CPEB1 domain (Fig. 2, B and C); however, alignment of the conserved 31-amino acid region of Xenopus Musashi-1 and Musashi-2 with CPEB1 failed to identify an obvious region of homology. We suspect that the GLD2 interaction domain on target proteins may involve structural folding rather than a simple dependence upon a linear amino acid sequence. The region within GLD2 necessary for interaction with Caenorhabditis elegans GLD3, an RNA-binding protein, has been determined (42), but it is unclear if this domain mediates interaction with RNA-binding proteins in vertebrate cells. Further studies will be required to definitively map the GLD2 interaction domains within CPEB1 and Musashi to better define a consensus GLD2 target interaction motif.
Consistent with a conserved interaction of Musashi proteins and GLD2, we observe co-association of the mouse orthologs of GLD2 and Musashi-1. This finding suggests that mammalian Musashi-1 may employ a similar strategy as seen in oocytes to promote target mRNA translation. We did not, however, observe any significant interaction between mouse Musashi-2 and GLD2 (Fig. 6A), suggesting a Musashi isoform-specific interaction. Indeed, precedent for Musashi-1-and Musashi-2specific protein associations has been suggested previously (48,49). It is possible that mammalian Musashi-2 may not mediate polyadenylation. However, our recent work indicates that both mouse and human Musashi-2, like the Xenopus Musashi-2, do promote polyadenylation in Musashi antisense-treated Xenopus oocytes. 3 Thus, an alternative possibility is that Musashi-2-directed polyadenylation of target mRNAs in mammalian cells may involve a different poly(A) polymerase.
Although Musashi proteins have been primarily implicated in promoting mammalian stem and progenitor cell self-renewal and proliferation via repression of target mRNAs (e.g. m-Numb and p21) (50 -53), recent findings have emerged demonstrating that Musashi switches from a repressor to an activator of target mRNA translation in response to mammalian neural stem cell differentiation cues (54). A role for Musashi-mediated activation has also been proposed for Robo3/Rig-1 mRNA translation during midline crossing of precerebellar neurons (55) and activation of m-Numb mRNA translation in the gastric mucosa of mice (56), although the underlying mechanisms have not been elucidated. Our findings of Musashi association with GLD2 and target mRNA polyadenylation during Xenopus oocyte maturation may serve as a paradigm to explain Musashi-mediated activation of target mRNA translation in mammalian systems. Given the role of Musashi repression in promoting physiological and pathological stem cell self-renewal, our mechanistic insights into the bipartite function of Musashi may provide a foundation for future therapeutic control of stem cell function.
|
v3-fos-license
|
2020-07-16T09:06:53.203Z
|
2020-07-21T00:00:00.000
|
220670534
|
{
"extfieldsofstudy": [
"Medicine",
"Chemistry"
],
"oa_license": "CCBY",
"oa_status": "HYBRID",
"oa_url": "https://pubs.rsc.org/en/content/articlepdf/2020/dt/d0dt02406f",
"pdf_hash": "c1fd96acbbde96e1b0a31b11d9670d9c8bdc93b2",
"pdf_src": "Adhoc",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46390",
"s2fieldsofstudy": [
"Chemistry"
],
"sha1": "3dcab8b4743659834e496455175e6e011ee57997",
"year": 2020
}
|
pes2o/s2orc
|
Synthesis and reactivity of alkaline-earth stannanide complexes by hydride-mediated distannane metathesis and organostannane dehydrogenation
The synthesis of heteroleptic complexes with calcium – and magnesium – tin bonds is described. The dimeric β -diketiminato calcium hydride complex, [(BDI)Ca( μ -H)] 2 ( I Ca ) reacts with Ph 3 Sn – SnPh 3 to provide the previously reported μ 2 -H bridged calcium stannanide dimer, [(BDI) 2 Ca 2 (SnPh 3 )( μ -H)] ( 3 ). Computational assessment of this reaction supports a mechanism involving a hypervalent stannate intermediate formed by nucleophilic attack of hydride on the distannane. Monomeric calcium stannanides, [(BDI)Ca(SnPh 3 )·OPPh 3 ] ( 8·OPPh 3 ) and [(BDI)Ca(SnPh 3 )·TMTHF] ( 8·TMTHF , TMTHF = 2,2,5,5-tetramethyl-tetrahydrofuran) were obtained from I Ca and Ph 3 Sn – SnPh 3 , after addition OPPh 3 or TMTHF. Both complexes were also synthesised by deprotonation of Ph 3 SnH by I Ca in the presence of the Lewis base. The calcium and magnesium THF adducts, [(BDI)Ca(SnPh 3 )·THF 2 ] ( 8·THF 2 ) and [(BDI)Mg(SnPh 3 )·THF] ( 9·THF ), were similarly prepared from [(BDI)Ca( μ -H)·(THF)] 2 ( I Ca ·THF 2 ) or [(BDI)Mg( μ -H)] 2 ( I Mg ) and Ph 3 SnH. An excess of THF or TMTHF was essential in order to obtain 8·TMTHF , 8·THF 2 and 9·THF in high yields whilst avoiding redistribution of the phenyl-tin ligand. The resulting Ae – Sn complexes were used as a source of [Ph 3 Sn] − in salt metathesis, to provide the known tristannane Ph 3 Sn – Sn( t -Bu) 2 – SnPh 3 ( 11 ). Nucleophilic addition or insertion with N , N ’ -di-iso-propylcarbodiimide provided the stannyl-amidinate complexes, [(BDI)Mg{(iPrN) 2 CSnPh 3 }] ( 12 ) and [(BDI)Ca{(iPrN) 2 CSnPh 3 }·L] ( 13·TMTHF , 13·THF , L = TMTHF, THF). The reactions and products were monitored and characterised by multinuclear NMR spectroscopy, whilst for compounds 8 , 9 , 12 , and 13·THF , the X-ray crystal structures are presented and discussed.
Introduction
Although Grignard's ubiquitous organomagnesium compounds have been widely used as synthetic reagents for over a century, the catalytic potential of alkaline-earth (Ae) reagents was largely overlooked until the past two decades. 1,2 By analogy to well-established lanthanide(III)-based catalysis, 3 Ae 2+ centres participate in redox-neutral catalytic cycles that are assembled from fundamental steps such as polarised 2σ-2π insertion and 2σ-2σ metathesis. 2 In many cases, reactivity is better described by non-concerted processes involving attack of an Ae-bound nucleophile on a substrate, such as a silane, that is capable of expanding its coordination sphere. [4][5][6][7] As such, the heavier alkaline earths (Mg-Ba) are adept at mediating catalytic dehydrocoupling, 5,[8][9][10][11][12][13][14][15] hydrofunctionalisation, [16][17][18][19][20][21][22][23] and even reductive hydrogenation reactions. [24][25][26] We have previously reported the use of silylboranes to perform the catalytic 'disilacoupling' of amines and boranes, a non-dehydrogenative process thought to be dependent on Ae-mediated redox-neutral metathesis of N-H and Si-B σ-bonds (Scheme 1, top). 27 A model reaction between a β-diketiminato (BDI) magnesium butyl complex and the silylborane, PhMe 2 Si-Bpin (Bpin = pinacolatoboryl), resulted in elimination of nBu-Bpin and isolation of the magnesium silanide complex, 1 (Scheme 1, bottom). 27 Computational assessment has suggested that this reaction is best described by nucleophilic attack of a butyl group on the boron centre to provide a borate intermediate from which the silyl group is subsequently transferred to magnesium. 28 Bis(pinacolato)diboron (B 2 pin 2 ), which contains a non-polar B-B σ-bond, was shown to react in a similar way with [(BDI)MgBu] to provide an isolable diboranate complex, 2a (Scheme 1, bottom). Treatment of 2a with 4-dimethylaminopyridine (DMAP) promoted heterolysis of the B-B bond and delivered the terminal magnesium boryl species, 2b, which is a source of the nucleophilic [Bpin] − anion. 29 The Ae-centred manipulation of boron-, silicon-, and organic substrates has, thus, received significant attention. In contrast, comparable reports of Ae-mediated reactivity suitable for the construction of catalytic cycles involving organostannanes, which could provide an attractive route towards materials such as polystannanes 30,31 or act as sources of organostannane cross-coupling reagents, 32,33 are lacking. The majority of published Ae-mediated organotin chemistry focusses on the irreversible, stoichiometric reaction between the group 2 element and organotin halides, distannanes and silastannanes. [34][35][36][37][38][39][40][41] We recently reported that the BDI-calcium stannanide complexes 3 and 4 may be accessed through deprotonation of commercially available triphenylstannane by the soluble calcium hydride complex, I Ca (Scheme 2a). 42 Crystallographically characterised examples of Ae-Sn bonds were previously limited to the calcium and magnesium complexes 5 and 6, and the barium species 7 (Scheme 2b). Compound 5 was readily prepared by the oxidative addition of hexamethyldistannane to calcium metal, 43 whereas the synthesis of 6 and 7 utilised salt metathesis routes from group 1-metallated precursors. [44][45][46] Neither of these strategies, however, is likely to be amenable to incorporation into catalytic cycles.
Since the formation of 3 and 4 is redox neutral at calcium and generates H 2 rather than insoluble salts as a by-product, it holds attractive potential for the development of Ae-based catalysts for processes such as hydrostannylation or stannane dehydrocoupling.
Distannanes are synthetically useful precursors to organotin radicals, 47-53 as well as 1,2-distannylated alkanes and alkenes via transition metal-catalysed distannylation of alkenes and alkynes. 54 Such organotin compounds are valuable cross-coupling reagents in organic synthesis. 49,55 Although the heterogeneous reaction of distannanes with solid alkali 56 and alkaline-earth 41,43 metals is well-known, the manipulation of distannanes by soluble s-block complexes has not been described. By analogy to the nucleophilic substitution-like process operative in the formation of Mg-silyl and -boryl species 1 and 2a/2b, we speculated that molecular calcium hydride and alkyl derivatives may react with Ph 3 Sn-SnPh 3 , providing an alternative route to nucleophilic calcium stannanide complexes. These investigations were motivated by the limitations encountered during our previously described synthesis of 3 and 4. 42 Firstly, I Ca also promotes redistribution of the organotin substrate, culminating in the generation of homoleptic SnPh 4 and ( presumably) SnH 4 , the latter of which rapidly decomposes to give Sn (0) and H 2 . Secondly, the strongly bound dimer of 3 retains a μ 2 -hydride ligand and the sub-sequent formation of 4 is low-yielding and slow, impeding any rational assessment of the reactivity of these unusual compounds. In this contribution, therefore, we describe the facile and high yielding synthesis of well-defined, monomeric Aestannanide complexes and a preliminary assessment of their nucleophilic reactivity.
Results and discussion
Reaction of I Ca with Ph 3 Sn-SnPh 3 and synthesis of compound 3
When I Ca was dissolved in C 6 D 6 with an equimolar quantity of Ph 3 Sn-SnPh 3 , the reaction mixture bubbled gently and darkened from pale-yellow to orange-brown over the course of six hours. The respective μ-hydride and BDI-γ-CH proton resonances of I Ca at δ 4.27 and 4.83 ppm in the in situ 1 H NMR spectrum were replaced by two new singlets of relative intensity 2 : 1 at δ 4.75 and 3.83 ppm. The latter signal displayed unresolved 117/119 Sn satellites with 2 J ( 117/119 Sn-1 H) = 94 Hz, while the corresponding 119 Sn{ 1 H} NMR spectrum revealed complete consumption of the distannane and the appearance of a signal at δ −139.8 ppm, which was accompanied by the generation of Ph 4 Sn (δ −126 ppm). 57 These observations were consistent with the formation of the μ-H-bridged dimeric calcium stannanide, 3, whilst the brown colouration was assigned to the formation of colloidal tin. 42 Although the slow formation of compound 4 was identified by its resonance at δ −158.5 ppm in the 119 Sn{ 1 H} NMR spectrum after a further five days at room temperature, complete conversion to this product was not obtained (Scheme 3).
Computational and mechanistic investigation of I Ca mediated Ph 3 Sn-SnPh 3 activation
In order to assess the mechanism of Ph 3 Sn-SnPh 3 activation, the reaction between I Ca and Ph 3 Sn-SnPh 3 was investigated by density functional theory (DFT, Fig. 1a, BP86 optimised, see ESI † for full details of computational methodology). Although we cannot, at this juncture, discount the operation of competitive single electron-based processes, consistent with the reported reactivity of compound I Ca thus far, 25 these calculations are suggestive of a metathesis-based reactivity. Following the initial formation of a van der Waals encounter complex (A, ΔG = +8.7 kcal mol −1 ), the distannane is subjected to nucleophilic attack by one of the μ 2 -hydride ligands (H a ) via transition state TS AB (Fig. 1b, ΔG ‡ = +12.7 kcal mol −1 ), at which the Sn a -Sn b bond is marginally elongated from 2.85 Å (calculated for Ph 3 Sn-SnPh 3 ) to 2.87 Å. Inspection of the Ca a -H a and H a -Sn a bond lengths (2.30 Å and 2.14 Å, respectively) in the subsequent intermediate, B (Fig. 1c, ΔG = +5.2 kcal mol −1 ), is suggestive of the transfer of H a to Sn a and the formation of a hypervalent stannate anion with a Sn a -Sn b distance of 2.95 Å. The Sn a -Sn b distance elongates to 3.55 Å in the transition state TS BC (ΔG ‡ = +8.2 kcal mol −1 ), facilitating cleavage of the stannate anion and concerted formation of a Ca a -Sn b bond (distance in TS BC = 2.24 Å) to give intermediate C (ΔG = +2.5 kcal mol −1 ). Subsequent dissociation of Ph 3 Sn a H a provides 3, at ΔG = −5.2 kcal mol −1 . Whilst the overall process is only moderately exergonic, the modest kinetic barrier is consistent with the room temperature reaction conditions. Meanwhile, rapid consumption of the resultant molecule of Ph 3 SnH provides a thermodynamic driving force, yielding H 2 and a second molecule of 3.
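For context, barrier heights of the magnitude quoted above can be translated into approximate room-temperature rate constants with the Eyring equation, k = (k_B T/h)·exp(−ΔG‡/RT). The back-of-the-envelope sketch below does this for the two computed transition-state energies; it is an illustration added here, not part of the reported DFT study, and it treats each ΔG‡ as an effective first-order barrier.

```python
# Back-of-the-envelope Eyring estimate (illustrative, not from the ESI):
# k = (kB*T/h) * exp(-dG_barrier / (R*T)) for an effective first-order step.
import math

KB = 1.380649e-23      # Boltzmann constant, J/K
H  = 6.62607015e-34    # Planck constant, J*s
R  = 8.314462618       # gas constant, J/(mol*K)
KCAL_TO_J = 4184.0

def eyring_rate(dg_kcal_per_mol: float, temperature_k: float = 298.15) -> float:
    """Approximate first-order rate constant (s^-1) from a free-energy barrier."""
    dg_j = dg_kcal_per_mol * KCAL_TO_J
    return (KB * temperature_k / H) * math.exp(-dg_j / (R * temperature_k))

for dg in (8.2, 12.7):   # barriers discussed in the text, kcal/mol
    print(f"dG = {dg:5.1f} kcal/mol  ->  k ~ {eyring_rate(dg):.2e} s^-1")
```

Both values correspond to fast processes on the laboratory timescale, consistent with the reaction proceeding at room temperature as described.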
Experimental evidence in support of this mechanism was obtained by carrying out the analogous reaction between Ph 3 Sn-SnPh 3 and the n-hexyl-calcium complex [(BDI)Ca(Hex)] 2 (II). The relatively poor solubility of both substrates in C 6 D 6 and the greater steric demand of the hexyl ligand compared to the hydride of I Ca resulted in sluggish reaction kinetics. Nevertheless, after gentle heating to 40°C for 48 hours, the characteristic triplet corresponding to the α-CH 2 protons of II at δ −0.71 ppm was all but absent from the 1 H NMR spectrum. Although this observation was accompanied by almost complete redistribution to [(BDI) 2 Ca] 58 as the only soluble BDI-containing product, a resonance at δ −98.2 ppm in the corresponding 119 Sn{ 1 H} NMR spectrum revealed Ph 3 Sn(Hex) as the predominant tin-containing species. 59 The absence of any unambiguously identifiable alkyl or stannyl-calcium species, such as a n-hexyl-containing analogue of 3, may be attributed to the likely low thermal stability of such intermediates. Formation of Ph 3 Sn(Hex), however, can be rationalised by attack of a calcium-bound n-hexyl-nucleophile on the distannane, with subsequent transfer of [Ph 3 Sn] − to calcium and elimination of Ph 3 Sn(Hex) (Scheme 4). Whereas the tetraorganostannane is inert towards further reactivity under these conditions, a similar reaction with I Ca would yield Ph 3 SnH, which is rapidly deprotonated by a second molecule of I Ca to provide 3.
reaction of I Ca with Ph 3 SnH 42 or Ph 3 Sn-SnPh 3 . It was also anticipated that the μ 2 -hydride of 3 would provide a likely complication in subsequent efforts to assess the reactivity of the Ca-Sn bond. With this in mind, we speculated that addition of a Lewis base would encourage fragmentation of the dimer, result in reaction of both hydride ligands, and provide a highyielding route towards a well-defined monomeric calcium stannanide. Similar strategies have previously been applied suc-cessfully to achieve, for example, the isolation of monomeric magnesium complexes comprising terminal hydride and boryl ligands. 29,60,61 To this end, the reaction between I Ca and Ph 3 Sn-SnPh 3 was repeated and, after quantitative conversion of I Ca was ascertained by 1 H NMR spectroscopy, an equimolar equivalent of Ph 3 PO was added to the in situ generated solution of 3 (Scheme 5). Upon standing at room temperature for 24 hours, the reaction mixture took on an opaque dark-brown appearance and, in addition to several minor species, a major new BDI-γ-CH resonance was observed to have emerged at δ 5.23 ppm. The 119 Sn{ 1 H} NMR spectrum displayed a doublet at δ −146 ppm, whose coupling constant of 10 Hz is consistent with the sparse number of 3 J ( 31 P-117/119 Sn) coupling constants that have been reported. 62,63 Notwithstanding some minor peaks at δ 88-89 ppm and 71 ppm, consistent with this observation, the 31 P{ 1 H} NMR spectrum was free of evidence for any unligated phosphine oxide. The spectrum also comprised a major resonance at δ 36.4 ppm, which displayed unresolved 117/119 Sn satellites with an approximate coupling constant consistent with that observed in the 119 Sn{ 1 H} NMR spectrum. Recrystallisation of the crude product mixture from toluene/ hexane provided single-crystals of the monomeric Ph 3 POadduct 8·OPPh 3 in low yield, from which the molecular structure was determined by X-ray diffraction analysis (Fig. 2a). Crystals of the known compound [(BDI)Ca(OPPh 2 )] 2 were also obtained from the same sample and identified from the unit cell-parameters determined by X-ray diffraction. 64 This observation is consistent with the calcium hydride-mediated reduction chemistry previously reported for phosphine oxides, 64 and helps to account for the low yield and poor selectivity of this reaction. Compound 8·OPPh 3 was, however, obtained cleanly from the single-step reaction of I Ca with two equivalents each of Ph 3 SnH and Ph 3 PO in C 6 D 6 . Although high solubility of the crystalline product obtained from this reaction provided a low, unoptimised isolated yield, it displayed identical NMR resonances to those described above. Mindful of phosphine oxide reactivity towards reductive and/or nucleophilic alkaline-earth complexes, 61,64 it was decided that 2,2,5,5-tetramethyltetrahydrofuran (TMTHF) would be a better choice of Lewis-base. Westerhausen and coworkers have recently reported the use of TMTHF to prepare monomeric amide complexes [Ae{N(SiMe 3 ) 2 } 2 ·TMTHF] (Ae = Mg, Ca, Sr, Ba), in which the TMTHF ligand is highly labile in solution. 65 We reasoned that, whilst coordination of TMTHF would encourage monomerisation, its relatively labile binding compared to more common bases such as THF or DMAP, might enhance the reactivity of the resultant calcium stannanide complex. Hence, I Ca was dissolved in C 6 D 6 with two equivalents each of Ph 3 Sn-SnPh 3 and TMTHF (Scheme 6). 
Analysis of the crude reaction mixture by 1 H NMR spectroscopy showed complete conversion of the starting materials after two days at room temperature. A new product, 8·TMTHF, was characterised by a broadened resonance at δ 5.21 ppm corresponding to the γ-CH of the BDI ligand backbone. The 119 Sn{ 1 H} NMR spectrum comprised a resonance at δ −170.6 ppm in addition to a signal which was readily assigned as Ph 4 Sn at δ −126 ppm. Colourless block-like single crystals deposited from the reaction mixture overnight and were shown to be the monomeric TMTHF-solvated calcium stannanide, compound 8·TMTHF, by X-ray diffraction analysis (Fig. 2b). Compound 8·TMTHF could also be obtained by reacting I Ca with two equivalents of Ph 3 SnH in toluene (Scheme 6). Organostannane redistribution to Ph 4 Sn was completely circumvented by use of a ten-fold excess of TMTHF, and 8·TMTHF was deposited as colourless crystals on standing at room temperature overnight in 68% yield.
Once crystallised, 8·TMTHF is sparingly soluble in aromatic solvents but is readily soluble in THF. The 1 H NMR spectrum in d 8 -THF displayed a single, well-defined BDI environment, while resonances observed at δ 1.80 and 1.16 ppm suggested displacement of TMTHF from the calcium centre by the NMR solvent. The resultant 119 Sn{ 1 H} chemical shift was also substantially perturbed, with a single resonance appearing at δ −137.3 ppm. Despite poor solubility, attempts to obtain NMR spectra of isolated and vacuum-dried crystals of 8·TMTHF in C 6 D 6 were successful, although the resonances were weak and broadened. Nevertheless, we were interested to find that two species were clearly discernible by 1 H NMR spectroscopy. These were identifiable as 8·TMTHF, by the BDI γ-CH resonance at δ 5.21 ppm, and the Lewis base-free compound 4 (δ 5.02 ppm for the γ-CH of the BDI backbone), 42 although their contrasting solubility in aromatic solvents prevented any confident, quantitative analysis of their relative abundance in solution. The apparent lability of TMTHF under vacuum was, however, further supported by the low relative intensity of its associated 1 H resonances when vacuum-dried samples were redissolved in d 8 -THF. In order to investigate the viability of 8·TMTHF as a convenient precursor to 4, therefore, isolated crystals were stirred in the solid state under vacuum at 80°C for sixteen hours. The resultant pale-yellow powder was only partially soluble in d 8 -toluene and, although the ratio of the two species determined by integration of the 1 H NMR spectrum was increased in favour of 4, substantial quantities of 8·TMTHF remained. Both species could be clearly discerned in the resulting 119 Sn{ 1 H} NMR spectrum, which comprised two resonances at δ −160.5 (4) and −170.7 ppm (8·TMTHF).
The solution-state behaviour of 8·TMTHF was also investigated by variable temperature 1 H NMR in d 8 -toluene. Whilst separate environments for 4 and 8·TMTHF could be discerned at 298 K, the γ-CH signals coalesced to a single broad resonance at δ 5.11 ppm above 318 K. Similarly, resonances assigned to free TMTHF experienced a pronounced and simultaneous upfield shift with increasing temperature. Although no more quantitative information could be extracted from these experiments, both of these observations suggest the establishment of a coordination-decoordination equilibrium when isolated 8·TMTHF samples are dissolved in arene solvents, facilitated by the lability of coordinated TMTHF.
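The coalescence behaviour described above can be placed on a rough quantitative footing with the standard two-site exchange treatment. The sketch below is illustrative only: it assumes a 500 MHz spectrometer, equal populations of 4 and 8·TMTHF and simple two-site exchange, none of which is established by the data reported here, so the resulting rate and barrier should be read as order-of-magnitude estimates rather than measured values.

import math

# Assumed inputs (not taken from this work): 500 MHz 1H spectrometer,
# equal populations, simple two-site exchange between 4 and 8·TMTHF.
spectrometer_mhz = 500.0            # hypothetical field strength
delta_ppm = 5.21 - 5.02             # separation of the gamma-CH resonances (ppm)
t_coalescence = 318.0               # approximate coalescence temperature (K)

delta_nu = delta_ppm * spectrometer_mhz        # shift difference in Hz

# Coalescence condition for two equally populated sites: k_c = pi * delta_nu / sqrt(2)
k_c = math.pi * delta_nu / math.sqrt(2)

# Eyring equation rearranged for the free energy of activation at coalescence
k_B = 1.380649e-23      # J/K
h = 6.62607015e-34      # J*s
R = 8.314462618         # J/(mol*K)
dG_act = R * t_coalescence * math.log(k_B * t_coalescence / (h * k_c))

print(f"delta_nu ~ {delta_nu:.0f} Hz, k_c ~ {k_c:.0f} s^-1")
print(f"approximate exchange barrier ~ {dG_act / 1000:.0f} kJ/mol")

With these assumed inputs the script returns an exchange rate of roughly 200 s^-1 and a barrier of roughly 64 kJ/mol at coalescence; the values scale directly with the assumed field strength and population model.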
The THF-solvated calcium hydride, [(BDI)CaH·THF] 2 (I Ca ·THF 2 ) was also reacted with two equivalents of Ph 3 SnH under a ten-fold excess of THF in toluene. After stirring overnight at room temperature, volatiles were removed under vacuum to provide the bis-THF adduct, 8·THF 2 , as a pale cream-coloured powder in high yield (Scheme 6). Its molecular structure (Fig. 2c) was determined by X-ray diffraction analysis performed on single crystals obtained by slow evaporation of a saturated toluene/THF solution. Compound 8·THF 2 is readily soluble in aromatic solvents and displays a well-defined 1 H NMR spectrum in C 6 D 6 or d 8 -toluene. The single 119 Sn environment resonates at δ −138.4 ppm in C 6 D 6 . When dissolved in d 8 -THF, the 1 H and 119 Sn{ 1 H} NMR spectra of 8·THF 2 were identical to that of 8·TMTHF in the same solvent, supporting the hypothesis that TMTHF is readily displaced from the calcium centre in THF-solution.
Whilst I Ca reacts rapidly with two equivalents of Ph 3 SnH to provide 3, the magnesium congener, [(BDI)MgH] 2 (I Mg ), reacts much more slowly. Although approximately 50% of the initial Ph 3 SnH was observed to have redistributed to Ph 4 Sn after five days at room temperature (Scheme 7), the 1 H NMR spectrum showed no net consumption of I Mg . In addition, no other significant BDI- or Sn-containing products could be detected by either 1 H or 119 Sn{ 1 H} NMR spectroscopy. Repetition of the reaction in toluene with a ten-fold excess of THF, however, not only suppressed organostannane redistribution but also accelerated consumption of Ph 3 SnH. The monomeric magnesium stannanide complex, 9·THF, was thus obtained in near-quantitative yield as a colourless powder after stirring for 16 hours at room temperature and removal of volatiles under vacuum (Scheme 7). Single crystals suitable for X-ray diffraction analysis were obtained by slow diffusion of hexane vapour into a THF solution at −30°C, providing confirmation of the solid-state structure (Fig. 2d). Compound 9·THF is readily soluble in aromatic solvents and THF and, although the 1 H resonances associated with the iso-propyl groups were substantially broadened in C 6 D 6 at 25°C, both the 1 H and 13 C{ 1 H} NMR spectra were indicative of a single BDI environment. Similarly, the 119 Sn{ 1 H} NMR spectrum displayed a single resonance at δ −155.4 ppm in C 6 D 6 .
X-ray diffraction analysis of 8·OPPh 3 , 8·THF 2 , 8·TMTHF and 9·THF
Compounds 8·OPPh 3 , 8·THF 2 and 9·THF each crystallise in the monoclinic space group, P2 1 /c, whilst the crystal structure of 8·TMTHF adopts the P2 1 /m space group (Fig. 2a-d; selected bond distances and angles are presented in Table 1). Whilst the geometries of the four-coordinate calcium centres in 8·OPPh 3 and 8·TMTHF are best described as distorted tetrahedra, 9·THF adopts a near trigonal-pyramidal geometry, with the magnesium centre situated 0.557(1) Å above an equatorial plane defined by the nitrogen and tin atoms (Σ angles = 342°). The geometry of the five-coordinate calcium centre in 8·THF 2 can be considered as a heavily distorted trigonal bipyramid, with the [Ph 3 Sn] − and one THF ligand in the axial positions and the BDI ligand and the second THF molecule occupying the equatorial sites. Compound 8·TMTHF is bisected through C3, the C16-C21 phenyl ring, and the furan ring by a mirror plane that is intrinsic to the space group, such that half a molecule is present per asymmetric unit. The methyl groups of the TMTHF ligand were disordered across the crystallographic mirror and a weak anagostic interaction was observed between one methyl group and the calcium centre, manifested by an H33C-Ca1 distance of 2.85(6) Å. 43 The phosphine oxide adduct, 8·OPPh 3 , displays an apparently more compressed coordination sphere, with shorter Ca-Sn, -N, and -O bonds than the furan-coordinated analogues. As a likely result of the steric congestion imposed by the bulky BDI ligand on the relatively weakly Lewis-basic triphenylstannanide anion, the Mg-Sn bond of 9·THF is longer (2.8340(6) Å) than that of the [Sn(SiMe 3 ) 3 ] − -based complex, 6 (2.817(1) Å), the only other crystallographically characterised example of this type of bond in the literature. 45 In the calcium complexes, the metal centres project by 1.266(2) Å (8·THF 2 ), 1.449(2) Å (8·OPPh 3 ) and 1.575(2) Å (8·TMTHF) from the mean plane of the BDI ligand backbone, and the [Ph 3 Sn] − moiety is located above the BDI-ligand backbone. In contrast, the smaller ionic radius of magnesium results in a 0.742(2) Å displacement of the metal centre from the BDI-plane in 9·THF, forcing the stannanide ligand away from the iso-propyl groups and into the 'pocket' defined by the flanking Dipp groups of the BDI-ligand. The calcium complexes all display slightly compressed C-Sn-C angles, thus distorting the geometry of the otherwise tetrahedral tin centres. The Ca1-O1-P1 angle of
Salt metathesis reactions with 8·TMTHF and 9·THF
With a series of well-defined monomeric Ae-stannanide derivatives in hand, we undertook an initial exploration of their reactivity. The highly ionic nature of the Ae-Sn bond suggests that these compounds can be considered as hydrocarbon-soluble salts of the [Ph 3 Sn] − anion. As such, compounds 8·TMTHF and 9·THF were reacted with 0.5 equivalents of t-Bu 2 SnCl 2 in C 6 D 6 (Scheme 8). Both reactions provided a relatively clean 1 H NMR spectrum indicative of the formation of a single major BDI-containing product. A pair of resonances at δ −76.9 and −137.2 ppm in the 119 Sn{ 1 H} NMR spectrum was consistent with formation of the alternating tristannane, Ph 3 Sn-Sn(t-Bu) 2 -SnPh 3 (11). The identity of this compound was confirmed by X-ray diffraction and NMR spectroscopic analysis performed on single crystals isolated by fractional recrystallisation of the crude product mixture from hexane/toluene. Unfortunately, a satisfactory sample of the calcium-containing by-product (10 Ca ) could not be isolated from the reaction involving 8·TMTHF. When 9·THF was used, however, colourless crystalline blocks were deposited from the reaction mixture and identified as the known chloride complex, [(BDI)Mg(μ-Cl)] 2 (10 Mg ), by comparison with the published unit cell parameters and NMR spectra. 66 Compound 11 was first isolated by Adams and Dräger in 1987, in 33% yield (versus 78% in the current work), by salt metathesis of the lithiated precursor, Ph 3 SnLi, with t-Bu 2 SnI 2 in THF and/or toluene. 67 Notably, although selectivity could be improved by variation of reaction stoichiometry, solvent polarity and concentration, this earlier approach yielded a mixture of Ph 3 Sn-capped tetra-, penta-, and hexastannanes, such that the published crystal structure of 11 was obtained as a component of a co-crystal with the tetrastannane, Ph 3 Sn-Sn(t-Bu) 2 -Sn(t-Bu) 2 -SnPh 3 . For completeness, therefore, the crystal structure of the pure tristannane, 11, is included in the ESI (Fig. S1 †). This reaction presents 8·TMTHF and 9·THF as promising alternatives to group 1 metallated organostannanes in salt metathesis reactions. [68][69][70][71]
Insertion/nucleophilic addition of Ae-Sn bonds to N,N′-di-iso-propylcarbodiimide
As an initial assay of the potential utility of BDI-Ae stannanides to engage in catalytically relevant insertion reactions with unsaturated small molecules, 8·TMTHF, 8·THF 2 and 9·THF were reacted with one equivalent of N,N′-di-iso-propylcarbodiimide (DIC) in C 6 D 6 (Scheme 9). To the best of our knowledge, the resultant compounds provide the first reported C-organostannyl analogues of the ubiquitous amidinate class of N,N-donor anions. Compound 9·THF required 48 hours to cleanly convert DIC into the stannyl-amidinate complex, 12, at room temperature. Compound 12 was characterised by an upfield-shifted resonance at δ −186.9 ppm in the 119 Sn{ 1 H} NMR spectrum and a characteristic resonance at δ 181.9 ppm in the 13 C{ 1 H} NMR spectrum, corresponding to the central carbon atom of the Mg-ligated stannyl-amidinate ligand. The 1 H NMR spectrum was indicative of a single, symmetrical BDI environment, with equivalent N-iso-propyl environments and characteristic SnPh 3 resonances with 119/117 Sn satellites. THF was absent from the isolated product, which was obtained as a colourless powder by removal of volatiles under vacuum and which could be crystallised from methylcyclohexane at −30°C.
The resultant colourless blocks were subjected to single-crystal X-ray diffraction analysis to provide the molecular structure of compound 12 (Fig. 3).
The calcium complexes were more reactive towards DIC compared to 9·THF. Compound 8·TMTHF provided a clear, colourless solution of the calcium stannyl-amidinate, 13·TMTHF, after 60 minutes of sonication at room temperature. A further reaction at room temperature for 16 hours also provided quantitative spectroscopic conversion to 13·TMTHF, and 13·THF was obtained in a similar manner from 8·THF 2 (Scheme 9). Compounds 13·TMTHF and 13·THF were isolated as colourless powders after removing volatiles from the reaction mixture and displayed similar 1 H, 13 C{ 1 H}, and 119 Sn{ 1 H} NMR spectra to 12. Compared to 12, the 119 Sn{ 1 H} resonances of 13·TMTHF and 13·THF exhibited slightly upfield shifts to δ −193.8 and −196.1 ppm, respectively, whilst the stannyl-amidinate 'backbone' carbon nuclei resonated at δ 179.1 and 177.1 ppm in the corresponding 13 C{ 1 H} NMR spectra. The 119 Sn and 117 Sn satellites could also be clearly discerned for the tin-bonded amidinate 13 C resonance of 13·TMTHF to provide coupling constants of 1 J ( 119 Sn) = 360.4 Hz, 1 J ( 117 Sn) = 344.7 Hz. The BDI and stannyl-amidinate ligands of both complexes display a set of resonances indicative of high symmetry and, in contrast to 12, the presence of a single coordinated TMTHF or THF was clearly discerned by 1 H NMR spectroscopy. Although attempts to acquire single crystals of 13·TMTHF were unsuccessful, colourless plate-like single crystals suitable for X-ray analysis of 13·THF were obtained by cooling a hexane/methylcyclohexane solution to −30°C.
Compound 12 crystallises in the monoclinic space group, P2 1 /c, with one molecule of the magnesium complex and one disordered solvent region, equating to two methylcyclohexane molecules, per unit cell. The solid-state structure of 12 (Fig. 3) consists of a four-coordinate, distorted tetrahedral magnesium centre, bonded to a BDI ligand via N1 and N2, and to a stannyl-amidinate ligand via N3 and N4. Although the Mg-N bond distances are all of a similar length, the magnesium centre is co-planar with the latter ligand but projects out of the mean N1-C2-C3-C4-N2 plane of the BDI ligand by 0.7483(16) Å. The two bidentate ligands are effectively perpendicular, such that the angle between the mean planes defined by N1-Mg1-N2 and N3-Mg1-N4-C30 is 90.47(6)°. Although no directly analogous stannyl-amidinate ligands have been reported previously, the C30-Sn1 bond length is unremarkable (2.2030(14) Å). The C30-N3 and C30-N4 bond lengths (1.3353(19), 1.3311(19) Å) are slightly longer, and the N3-C30-N4 angle (114.1(11)°) is slightly more acute, than those previously reported in homo- and heteroleptic N,N′-di-iso-propylformamidinate calcium complexes (ca. 1.28(3)-1.328(6) Å, 118.6(4)-121.3(2)°). 72,73 Compound 13·THF crystallised in the monoclinic Cc space group and, unusually, contains four crystallographically independent molecules per unit cell (Fig. S2 †). Because of this, and a fall-off in diffraction intensity at higher Bragg angles arising from the thin plate-like morphology of the crystal, a detailed discussion of the structure is unwarranted. The gross features of the compound are, however, unambiguous and the four molecules display only minor structural differences. The X-ray crystal structure of the Ca1/Sn1-containing molecule is shown in Fig. 4. The BDI, stannyl-amidinate and THF ligands are arranged about the five-coordinate calcium centre such that N1, N2, and N3 lie in an approximate equatorial plane, with O1 located axially. The two chelating ligands are arranged in a similar way to those in 12, with an average twist angle of approximately 93° between the mean planes defined by N BDI -Ca-N BDI and N am -Ca-N am -C am . Significant variations in the structural metrics pertaining to the stannyl-amidinate ligands of 12 and 13·THF were not unambiguously discernible, but the larger ionic radius and higher coordination number of calcium result in displacement of the metal centre by approximately 2.5 Å from the mean plane of the BDI ligand backbone.
Scheme 9 Synthesis of stannyl-amidinates 12, 13·THF and 13·TMTHF. Ar = 2,6-di-iso-propylphenyl; L = THF, TMTHF. Yields refer to unoptimised isolated yields; quantitative spectroscopic conversion was determined by 1 H NMR spectroscopy.
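The structural metrics quoted in this section (displacement of a metal centre from a ligand mean plane, sums of angles at a metal centre, inter-plane twist angles) can be recomputed from deposited atomic coordinates. The following sketch uses placeholder coordinates purely for illustration, not the refined data for any compound reported here, and shows one conventional way of obtaining such quantities with numpy.

import numpy as np

def plane_displacement(plane_atoms, atom):
    """Distance of `atom` from the least-squares plane through `plane_atoms` (N x 3 array)."""
    centroid = plane_atoms.mean(axis=0)
    # The normal of the best-fit plane is the singular vector with the smallest singular value
    _, _, vt = np.linalg.svd(plane_atoms - centroid)
    normal = vt[-1]
    return abs(np.dot(atom - centroid, normal))

def angle_sum(centre, donors):
    """Sum of all pairwise donor-metal-donor angles (degrees) around a metal centre."""
    total = 0.0
    for i in range(len(donors)):
        for j in range(i + 1, len(donors)):
            v1 = donors[i] - centre
            v2 = donors[j] - centre
            cos_t = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
            total += np.degrees(np.arccos(np.clip(cos_t, -1.0, 1.0)))
    return total

# Placeholder coordinates (in Å), purely for demonstration
backbone = np.array([[0.0, 0.0, 0.0], [1.4, 0.1, 0.0], [2.1, 1.3, 0.1],
                     [1.4, 2.5, 0.0], [0.0, 2.6, 0.0]])        # N-C-C-C-N ligand plane
metal = np.array([1.1, 1.3, 1.2])
donors = [np.array([0.0, 0.0, 0.0]), np.array([0.0, 2.6, 0.0]),
          np.array([2.9, 1.3, 1.5])]                            # e.g. N, N, Sn

print(f"displacement from mean plane: {plane_displacement(backbone, metal):.3f} Å")
print(f"sum of angles at the metal:   {angle_sum(metal, donors):.1f}°")

A sum close to 360° indicates a near-planar donor arrangement, whereas markedly smaller values, such as the 342° quoted above, reflect pyramidalisation of the metal centre.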
Conclusions
In conclusion, dimeric calcium and magnesium hydrides I Ca , I Ca ·THF 2 , and I Mg deprotonate triphenylstannane in the presence of an excess of coordinating Lewis base to provide clean access to well-defined monomeric Ae-stannanide complexes in good yield. Calcium stannanide complexes are also accessible through distannane heterolysis by nucleophilic attack of a calcium hydride. A preliminary exploration of the reactivity arising from the resultant compounds demonstrates their potential as well-defined, soluble sources of the [Ph 3 Sn] − anion in salt metathesis and nucleophilic addition reactions. Further work continues to explore the nature and reactivity of bonds between heavier p-block elements and the heavier alkaline earths.
Conflicts of interest
There are no conflicts to declare.
Physical rehabilitation to treat neuropathic pain
BACKGROUND AND OBJECTIVES: Neuropathic pain is disabling: it decreases quality of life, impairs professional performance, and limits the social participation of patients living with excruciating pain. In this context, it is easy to see physical rehabilitation as a facilitator of autonomy and mobility; however, its therapeutic role goes beyond these functions. With technological advances, new approaches have been proposed, and it is now possible to measure the performance of physical methods for pain modulation. CONTENTS: The innovative potential of physical rehabilitation in the treatment of neuropathic pain is discussed. Therapeutic options are considered, including electrothermotherapy, manual therapy, physical exercise, constant current transcranial stimulation, repetitive transcranial magnetic stimulation, visual mental exercises and mirror therapy, among others. Therapeutic modalities are addressed according to specific neuropathic pain conditions, and the authors draw a parallel between the pathological mechanism of each condition and the neurophysiological mechanism of the proposed therapeutic modality. CONCLUSION: In spite of differing pathological mechanisms and differing physical and mental approaches to patients, the importance of the patient's active participation in the rehabilitation process must be stressed.
INTRODUCTION
Neuropathic pain (NP) is a complex and heterogeneous condition with a negative impact on physical, mental and professional quality of life, and is associated with high healthcare costs 1 . Described by the International Association for the Study of Pain (2011) in terms of injury or disease affecting the peripheral or central somatosensory nervous system, NP affects 1% to 5% of the world population. Part of its complexity is due to heterogeneous clinical manifestations, which oscillate between constant or intermittent and spontaneous or induced pain, described by words such as shooting, stabbing, electric shock, burning, painful tingling, pressing, itching and pricking. This pathological condition is present in trigeminal neuralgia (TN), radicular NP and thalamic pain. It is also associated with other clinical conditions, such as diabetic peripheral neuropathy, affecting 46% of diabetes mellitus (DM) patients 2 ; postherpetic neuralgia, affecting 10% of patients 3 months after acute herpes zoster 3 ; chronic postoperative pain, which may affect 53% of patients one year after laminectomy 4 ; post-cancer NP, such as chemotherapy-induced neuropathy, neuropathy secondary to tumor antigens or compression of neural structures; post-stroke neuropathies; and post-spinal cord injury NP, affecting 31% of patients 5 . In addition, there are special cases such as complex regional pain syndrome (CRPS), nerve compression syndromes after burn injuries, and phantom limb pain. NP is difficult to manage and is associated with patients' dissatisfaction with surgical, pharmacological and non-pharmacological treatments. Several best-practice guidelines have been proposed to standardize treatments and multiprofessional approaches and to promote better pain management in this population. However, in evaluating the treatment models of such guidelines, one can see how recent the use of physical rehabilitation is as an adjuvant for NP treatment. In summary, documents directed at neuropathic pain in general concentrate almost exclusively on the pharmacological approach 6,7 or merely mention the participation of physical and mental health professionals, without determining their functions or objectives 8 . Physiotherapy and occupational therapy are addressed in the guidelines for treating post-spinal cord injury neuropathic pain 9 , with a broad discussion of physical rehabilitation in this NP sub-population. Within this context, this article discusses the innovative potential of physical rehabilitation to treat neuropathic pain. There are several therapeutic options, such as electrothermotherapy; manual therapy with muscle energy techniques, non-thrust mobilization and manipulation; cold therapy and traction; therapeutic massage; drugs and cervical collars; strengthening exercise, stretching and aerobic training; constant current transcranial stimulation (CCTS); repetitive transcranial magnetic stimulation (rTMS); visual mental exercises, imagery and mirror therapy; and somatosensory rehabilitation. With neurophysiological support, some modalities, such as physical exercise, have been studied extensively and have shown high scientific evidence for their therapeutic effects, whereas more recent ones, such as mirror therapy, have shown low scientific evidence. Therapeutic approaches are addressed according to specific NP conditions, and the authors propose a parallel between the specific pathological mechanisms of some NP conditions and the neurophysiological mechanism of the proposed therapeutic modality. Physical rehabilitation is discussed in the following clinical conditions: diabetic neuropathy, central nervous system (CNS) injuries, radiculopathy and peripheral nerve entrapment syndromes, and special cases such as burn injuries, phantom limb pain and CRPS.
PHYSICAL REHABILITATION IN DIABETIC NEUROPATHY
The most prevalent form of NP is associated with DM, affecting approximately 46% of patients 2 . Systemic changes of this metabolic disease affect vascular and nervous tissues, enabling the development of distal symmetric sensorimotor polyneuropathy, also described as diabetic neuropathy. There are several hypotheses for the pathophysiological mechanism underlying the symmetric degeneration of sensory A-delta and C fibers during periods of hyperglycemia and poor glycemic control 10 . Hyperglycemia is considered a vector that speeds up the formation of advanced glycation end products (AGEs) in peripheral nerves and adjacent tissues, facilitating carbonyl and oxidative stress. These biochemical and metabolic changes induce morpho-functional changes such as (a) increased expression of inflammatory mediators in myelinated or demyelinated neurons and Schwann cells and (b) functional changes in microvascular beds 10,11 . Progressive evolution of peripheral diabetic neuropathy impairs, among other things, plantar sensitivity and the healing of skin injuries, requiring patients to take particular care of the health of their feet; otherwise, injury may progress to necrosis and infection of the skin and underlying tissues, the treatment of which is amputation of the injured segment. It was erroneously assumed that diabetic neuropathy was a protective factor against phantom limb sensation and pain after amputation. However, the prevalence of phantom pain complaints in the lower limb does not differ between diabetics with peripheral neuropathy (82% of cases) and non-diabetics (89% of cases) 12 . Therapeutic modalities for diabetic neuropathy vary from the prescription of exercise to prevent the disease to the use of technological advances such as rTMS and CCTS to promote cortical changes involved in pain modulation. In general, exercise routines are major adjuvants to medical and pharmacological treatment for peripheral neuropathy. There is evidence of benefits such as (a) functional increase in macro- and microvascular beds, (b) improved endothelial function, (c) decreased vasoconstriction and increased blood flow, (d) increased muscle strength, (e) increased cardiorespiratory endurance, (f) direct effects on glycemia levels and on the formation of products such as AGEs, and (g) decreased DM-associated comorbidities, such as systemic hypertension and atherosclerosis 10 . In comparing aerobic exercise with strengthening exercise, a systematic review and meta-analysis observed that the former tends to decrease glycosylated hemoglobin more than the latter 13 . Although there is a vast literature showing beneficial effects of physical exercise on diabetic neuropathy, there are few studies with pain intensity as an outcome. Three important studies have investigated the effect of physical exercise on pain intensity in diabetic neuropathy. Using aerobic and resistance training for 10 weeks with 17 diabetic neuropathy patients, Kluding et al.
14 showed a significant decrease in pain intensity measured by the visual analog scale (VAS) and decreased neuropathic symptoms, in addition to an increase in intraepidermal nerve fibers on skin biopsy. In spite of methodological limitations (e.g., small sample and lack of a control group), this was one of the first studies to describe improved neuropathic symptoms and changes in skin nerve fibers after an exercise program in diabetic patients with NP 16 . Another aerobic exercise program lasting 16 weeks (n=14 patients) showed a significant decrease in general pain interference (walking, working, social relationships and sleep), however without changing pain intensity 15 . In line with these findings, a qualitative focus-group study with 47 NP patients stressed the biopsychosocial complexity of their complaints, especially loss of functional capacity (walking, standing up, balance, orthostatism, mobility), decreased daily productivity (leisure activities, work), psychosocial consequences (anxiety, irritability, fear) and sleep disorders (insomnia, non-restorative sleep) 16 . Data such as these are consistent with studies in other chronic pain populations, in which the objective of pain relief does not outweigh those of quality of life, quality of sleep and less interference of pain in daily life 17 . Among the adverse effects of aerobic exercise in the diabetic NP population is fatigue; however, pain as an outcome is still poorly explored by protocols applied to this population 18 . Fatigue is also reported by non-diabetic patients after intense aerobic exercise. In the search for new alternatives for diabetic NP, two studies evaluated the efficacy of exercise on vibratory platforms 19,20 . These studies, with small sample sizes (n=8 and 10, respectively) and low scientific evidence, showed a significant decrease in pain intensity on visual analog and NP scales 19 and improvement in strength and balance parameters 20 . Although some advocate the use of this equipment for NP physical rehabilitation, its physiological effect and its improvement of biomechanical variables are still questionable. Electrotherapy has been described as a physical therapy method with a potential analgesic effect on NP, especially diabetic neuropathic pain. Studies indicate transcutaneous electrical nerve stimulation (TENS) as the preferred method 21 . In a meta-analysis 22 , TENS for diabetic neuropathy had medium-term beneficial effects (6 and 12 weeks) on pain relief. TENS therapy was well tolerated and there were no reports of adverse effects. The included studies used low-frequency TENS (2-4 Hz), but the analgesic effects of different parameters were not analyzed. Thus, TENS may be effective for managing peripheral NP, but randomized, double-blind studies comparing parameters are still needed. Possible mechanisms of action of electrotherapy are related to the local release of neurotransmitters such as serotonin, adenosine triphosphate (ATP) and endorphins. Low-frequency currents improve microcirculation and endoneural blood flow, which may be particularly relevant for diabetic neuropathy. Studies suggest that TENS activates analgesia-producing central mechanisms. There is evidence that low-frequency TENS activates µ-opioid receptors in the spinal cord and brainstem, while high-frequency currents produce their effect by means of δ receptors 21 . Mima et al.
23 observed that high-frequency TENS also decreases motor evoked potential amplitude, suggesting a decrease in corticospinal and motor cortex excitability. Modulation of the primary motor cortex (M1) to control pain may also be obtained by noninvasive transcranial neuromodulation 24 . The most commonly used resources are rTMS and CCTS. Excitatory modulation of the primary motor cortex may be obtained with high-frequency rTMS (in general above 5 Hz) or anodal CCTS (anode over M1 and cathode over the contralateral supraorbital region). Stimulation of these areas modulates the thalamus and a series of other regions related to the neural networks of brain pain processing, including sensory and emotional processing regions 25,26 . Kim et al. 27 carried out a clinical trial with 60 NP patients divided into three groups submitted to active anodal CCTS over M1, active anodal CCTS over the dorsolateral prefrontal cortex (DLPFC), or sham CCTS, for five consecutive days. Only M1 modulation significantly decreased pain, and the effect was maintained for up to four weeks after treatment. A similar result was found in patients with diabetic neuropathy and associated plantar fasciitis. After five days of anodal CCTS, patients had a clinically important reduction in heel pain, associated with opioid withdrawal 28 . To date, just one study has investigated rTMS specifically to treat pain in diabetic neuropathy patients. Onesti et al. 29 used a deep-stimulation coil (H-coil) in five treatment sessions. The results showed a decrease in pain associated with a decrease in a physiological pain marker, the H reflex. In summary, physical rehabilitation in diabetic peripheral neuropathy involves (a) aerobic exercise, favored over strengthening exercise for its neurovascular benefits, (b) TENS and (c) rTMS. However, studies addressing treatment protocols, parameters, intensity, timing and duration, and especially studies with pain outcomes, are necessary to improve the understanding and prescription of these modalities.
PHYSICAL REHABILITATION FOR NEUROPATHIC PAIN AFTER CENTRAL NERVOUS SYSTEM INJURIES
Injuries or dysfunctions affecting the CNS may induce difficult-to-control pain, known as central pain. The most common causes are traumatic spinal cord injuries or diseases presenting with myelopathy, brain injuries (especially those involving the thalamus), multiple sclerosis and CNS tumors. In such conditions, the injuries themselves may be the source of symptoms. It is also possible that endogenous inhibitory mechanisms are affected, generating pain by inhibitory failure. In all these situations, patients may have different NP presentations, and physical treatment is part of the list of therapeutic possibilities. Depending on the case, it is possible to interfere with dysfunctional mechanisms using techniques that stimulate endogenous pain inhibition, such as neuromodulation with transcranial or peripheral electric or magnetic stimulation, acupuncture, exercise and mental practices. Next, a specific approach is described for each possibility for which there is evidence of clinical use. Noninvasive transcranial neuromodulation with direct-current transcranial electric stimulation was initially observed clinically in patients with pain secondary to spinal cord injury. Fregni et al. 30 showed that five days of anodal CCTS over M1 decreases patients' pain without interfering with their neuropsychological condition and without the effect being associated with the presence of anxiety or depression. Two recent meta-analyses have shown that anodal CCTS over M1 has a moderate analgesic effect on spinal cord injury pain 31,32 . The review by Boldt et al. 31 also covered other noninvasive neuromodulatory resources, such as rTMS and acupuncture, which however did not show an effect on these patients' pain.
CCTS was studied as a way to control multiple sclerosis pain in 2010 33 . This study showed that five consecutive days of anodal CCTS over M1 were able to decrease pain and improve the quality of life of multiple sclerosis patients. No subsequent study has directly addressed pain in these patients, addressing instead fatigue and psychiatric disorders. High-frequency rTMS is the most common modality used to control pain. In the case of spinal cord injury, this modality has shown controversial effects on pain control. Yilmaz et al. 34 showed a significant pain decrease in these patients; however, the statistical analysis used in this study did not test the interaction between group and time. Both hand and lower limb stimulation seem to have a better effect compared with sham stimulation 35 ; however, this was not shown in an initial study 36 . An important factor might be the number of pulses administered, since studies with around 500 to 1000 pulses per session have not shown an analgesic effect 36,37 . Low-frequency TENS, another electrical stimulation modality, albeit one applied to peripheral nerves, may also have an analgesic effect 38,39 . Exercise has been studied in some clinical trials as a way to control pain in spinal cord injury patients. A systematic review with meta-analysis 31 showed that this physical intervention modality had the best effect on pain reduction among a series of non-pharmacological interventions including neuromodulation, acupuncture, TENS, self-hypnosis and cognitive behavioral therapy.
PHYSICAL REHABILITATION IN RADICULOPATHIES AND PERIPHERAL NERVE ENTRAPMENT SYNDROMES
Nerve compression is an underlying cause of some neuropathic pains. Several anatomical areas are described as being more vulnerable to vasculo-nervous compression, such as the osteofibrous channels of the distal pathway of the brachial plexus nerves (e.g., the carpal tunnel), the lumbar plexus (sciatic pain) or the entry zone of the trigeminal nerve root in the cerebellopontine cistern. Compressive neuropathies have a central component in addition to the biomechanical cause of compression. Neuropathic pains involving compressive components include trigeminal neuralgia, radicular NP and NP in burned patients. Trigeminal neuralgia is an excruciating, high-intensity pain with allodynia. Vascular compression of the trigeminal nerve dorsal root may be caused by the superior cerebellar artery, an intracranial vascular abnormality or an internal carotid artery aneurysm, as well as by tumors, foreign bodies, bone injuries or osteomas. Although these findings can explain paroxysmal trigeminal pain, some studies describe excruciating facial pain without compression, and there are reports of compression without facial pain 40 . Hence, the bioresonance theory has been proposed 41 , in which changes in the vibration frequency of structures adjacent to the trigeminal nerve resonate and may induce nerve fiber injury, alter impulse transmission and finally result in facial pain. Other findings include nerve root demyelination, as in the case of multiple sclerosis 42 . Other causes of trigeminal neuralgia include trauma, viral infection (as in postherpetic neuralgia) and genetic causes 40 . The most prevalent surgical approaches are Gamma Knife surgery and microvascular 43 or radiofrequency 40 decompression. Studies describing the effects of conservative non-pharmacological treatments are few, so these approaches still have low scientific evidence. Physiotherapy, occupational therapy and other therapeutic approaches that act through movement, as well as through electric and thermal stimuli, tend to promote improvement in physical function and gains in functionality. Combining these therapeutic approaches with drug therapy is indicated in the early stages of pain, within a multimodal context, although some patients benefit from this functional approach to pain treatment 44 . Burst TENS applied over the affected nerve for 20 to 40 days, with evaluation after one and three months, showed a significant decrease in pain intensity evaluated by VAS, with no reports of adverse effects 45 . Similar effects are identified when TENS is applied in trigeminal neuralgia that is refractory to, or only partially responsive to, drugs, with a slightly better effect of the constant-current mode compared with the burst mode 46 . Although the results of these studies are favorable, both have methodological limitations that weaken the generalizability of their results. Hagenacker et al.
47 showed that anodal CCTS over M1, applied for 20 minutes per day for 14 days, decreased the pain of trigeminal neuralgia patients by 18%, a result of limited clinical impact. In contrast to trigeminal neuralgia, cervical and lumbar radiculopathies have a better prognosis with conservative methods. Radiculopathy is a nerve root injury caused by narrowing of the root's space, resulting from intervertebral disc herniation, spondylosis or osteophytes. This bone and ligament compression triggers pain radiating to the upper and lower limbs, weakness, paresthesia and a sensation of edema 48 . The objectives of conservative treatment are gains in range of motion, strengthening, coordination and balance. Manual therapy is used in radiculopathies in the form of muscle energy techniques, non-thrust mobilization and manipulation, alongside cold therapy and traction modalities, therapeutic massage, medication and cervical collars 48,49 . Manual therapy and exercise have high scientific evidence for short-term pain relief, moderate evidence for improved quality of life and low scientific evidence for long-term effects on decreasing pain and disability or improving function 50 . A randomized study of 42 cervical radiculopathy patients compared the effect of mechanical cervical traction with manual cervical traction, both associated with segmental mobilization and therapeutic exercises. The frequency of intervention was three weekly sessions for six weeks. At treatment completion, both groups had improved pain and disability, with no significant difference between groups, although there was a clinical trend toward a better effect of mechanical compared with manual traction 51 . A systematic review estimated that 57% of patients improve when submitted to manual therapy or neural mobilization and 46% when submitted to muscle energy techniques. This systematic review included just four studies. The authors emphasized the lack of randomized studies, control groups and comparisons among therapeutic resources. Another important limitation of studies on manual therapy is the lack of description of the techniques used in the tested protocols 48 . Regardless of whether the treatment of cervical radiculopathy is exclusively conservative or combined with surgery, prevention of recurrence and functional recovery involve muscle training, medication, cervical traction, manual therapy or a cervical collar. Exercise is becoming popular due to its promising effects on function and mobility. Muscle training involves strengthening, in general through isometric exercises of the deep cervical flexor muscles, shoulder retraction and the scapular muscles. Stretching exercises especially address the neck, shoulder girdle and chest. Some studies combine aerobic exercise with this analytical training. The effectiveness of this modality may be identified by gains in body function and structures, by increased social participation and activity levels, and by improved personal factors such as mood and satisfaction 52 . Clinicians and researchers debate the level of evidence of such therapeutic modalities. On the one hand, researchers aim to identify the therapeutic effect of each technique independently. On the other hand, clinicians advocate the combination of techniques and perceive an enhancement of effects through their interaction. There are studies investigating combined treatments that have confirmed clinicians' perceptions, although without assessing the efficacy of each modality or of their interactions. Improved functionality and pain relief are significant findings of treatment with combined therapeutic
modalities 53 . The most accepted mechanism for lumbar radiculopathy is propulsion of the nucleus pulposus through a breach in the fibrous ring of the intervertebral disc, causing immune-mediated irritation of adjacent nerve roots. This change in the intervertebral disc induces biomechanical imbalance in the lumbar spine and promotes a neurological deficit associated with nerve root involvement, impairing the joint alignment of the lumbar vertebrae. Physiotherapists tend to consider this change in vertebral alignment a key point in the pathological mechanics of radiculopathies. A joint-protection reaction inducing peripheral nerve irritation, or vice versa, has been described. Manipulations (high-velocity, low-amplitude therapeutic maneuvers) and segmental mobilizations (low-velocity maneuvers) are popular for promoting biomechanical adjustment, with movements directed at recovering lumbar spine range of motion and decompressing the nerve root. In parallel, they foster discussion on the challenge of ensuring the safety and efficacy of such techniques in the treatment of acute radiculopathy, because of a possible risk of injury given the joint involvement of the intervertebral disc 55 . Meta-analyses and systematic reviews highlight their low risk and their efficacy equivalent to conventional treatments such as analgesics, physiotherapy, exercise and posture/spine schools 56,57 . Physical exercise is also part of the list of therapeutic options for radiculopathies. Regular exercise of moderate intensity tends to favor sensorimotor function and the regeneration potential of injured axons. In summary, the results of animal-model studies attribute this effect of exercise to increased neurotrophin levels, recoding of neural activity, peripheral sensory reorganization, changes in supraspinal neuronal excitability and changes in cortical sensory projections 58 . For example, a study by Cobianchi et al. 59 compared two treadmill running protocols in mice after chronic constriction injury of the sciatic nerve. A brief exercise protocol (1 hour per day in the 5 days following the experimental nerve injury) decreased NP symptoms (decreased allodynia and decreased microglial and astrocytic expression). The brief running protocol also accelerated the sciatic nerve regeneration process. A different animal-model study, using a low-intensity treadmill walking protocol, adds serotonergic involvement to the mechanism of exercise-induced analgesia, in addition to a decrease in pro-inflammatory cytokines 60 . Although the evidence for the effect of physical exercise in animal models is attractive, similar studies in humans are scarce in the literature 61 . Some techniques aim to rebalance body structures through mobilization of neural and adjacent tissues (neural mobilization), but still show a low therapeutic effect in the treatment of peripheral nerve injury or compression 62 . High-frequency rTMS was better than anodal CCTS or sham treatment in decreasing pain secondary to lumbar radiculopathy 63 . Cervical collars are generally prescribed to decrease foraminal compression and, as a consequence, nerve root inflammation, by limiting vertebral range of motion. Kuijper et al. 64 evaluated a cervical collar or physiotherapy versus watchful waiting in patients with recent-onset cervical radiculopathy and concluded that, during the acute phase, both approaches promote short-term relief. Zarghooni et al.
65 reviewed the use and indication of cervical and lumbar orthoses to treat acute and chronic spinal diseases, highlighting the lack of high-quality studies and observing that, with regard to lumbar braces, there is no scientific evidence supporting their therapeutic use, nor evidence proving their ineffectiveness.
A randomized controlled clinical study evaluated the effect of contrast baths in the pre- and postoperative treatment of carpal tunnel syndrome, with hand volume as the studied variable. Although pain was not evaluated in the studied groups, the authors concluded that contrast baths were not effective in decreasing hand edema, and discussed the lack of randomized trials, including standardized protocols, to support the clinical use of this therapeutic technique 66 .
Burned patients
Generalized neuropathy after burn injury is a common morbidity, but one that is difficult to diagnose and manage; nerve compression syndromes are described after thermal or electrical burns but are poorly documented for chemical burns. Ranging from early manifestations in the first months after the burn to late manifestations more than four years after injury, it requires systematic evaluation and early diagnosis of NP in burned patients 67 . It affects between 2% and 84% of patients, and the cause is difficult to evaluate owing to the complex metabolic state of burned patients, the subsequent use of neurotoxic antibiotics and numerous other iatrogenic causes of neuropathy. Peripheral neuropathy is one of the most common neuromuscular complications in burned patients and probably the least diagnosed and most inadequately treated 68,69 . Nerve compression is manifested by electric- and thermal-shock sensations, described as worsening pain with signs of allodynia, hyperalgesia and itching. Males tend to have more signs of neuropathy than females, and patients with burns over more than 10% of the body surface have a higher prevalence of neuropathic pain 70 . Surgical intervention for nerve decompression is required for most patients 67 . Nerve decompression is considered an effective procedure for improving motor and sensory dysfunction after late burn injury of the limbs, although some patients are left with paresthesia and "drop foot", morbidities affecting a small number of patients 71 . In a longitudinal observational study of burned patients, 46% of cases had nerve compression in the carpal tunnel 67 . The integrity of the hands is critical for daily activities, and their rehabilitation deserves special attention, given the importance of precision and functionality, which are highly affected and at risk of injury. Contractures are the most common complications identified by physiotherapists. Functional post-burn treatment concentrates on the use of splints and long physiotherapy sessions to prevent edema and contractures, maintain or improve range of motion, recover function, prevent keloids, regain muscle strength and achieve esthetic and functional results.
In a report of four years of experience with rehabilitation after burn injury, these gains are highlighted; however, the authors do not address NP treatment in burned patients 72 . In general, there is a gap in the care of burned patients on the part of professionals working on function and motor autonomy. There are long descriptions and discussions of gains in range of motion and prevention of contractures 73 without addressing the frequent morbidity of peripheral neuropathy. Reflection on this subject is important, since these professionals have daily contact with patients, their maneuvers for mobility gain generally induce pain, and they should be alert to late signs of post-burn nerve compression. Severe burn injuries may produce scars with excruciating pain that is difficult to manage due to the poor response to conventional treatments. In search of therapeutic alternatives, Cuignet et al. 74 , in Belgium, applied an analgesia protocol with electroacupuncture in 32 patients with signs and symptoms of NP and pathological burn scars who had not responded favorably to previous treatments. Following this protocol, based on Traditional Chinese Medicine (TCM) and consisting of 30-minute sessions three times a week, they observed a decrease in pain intensity that was relevant only for patients with localized burn injuries, with no significant effect in patients with generalized hyperalgesia. Somatosensory rehabilitation in post-burn NP patients has variable effects, in some cases improving sensitivity and in others not 75 . Somatosensory rehabilitation aims to address hypoesthesic zones, is based on concepts of somatosensory system neuroplasticity, and proposes that mechanical allodynia masks an underlying hypoesthesia that could be treated initially through the hypoesthesic areas. One protocol tested 17 burned patients for touch discrimination, texture perception and vibratory stimuli only in hypoesthesic regions, assessed with monofilaments. Six patients no longer had allodynia after treatment of their hypoesthesic regions. However, the study results did not show a significant effect of the protocol in this sample. Further studies should be carried out to address the several methodological gaps of this study. A different potential approach to treating burned patients with NP is rTMS. Aiming to evaluate neuroplastic changes associated with chronic NP in this population, Portilla et al. 76 carried out a double-blind, sham-controlled study of a single session of anodal stimulation of the primary motor cortex contralateral to the site of worst body pain. However, this first study did not show clinical changes after a single session. Together with previous studies, this case series provides early evidence that, similar to other chronic pain patients, burned patients have a central mechanism with decreased cortical sensitivity and could benefit from rTMS.
Phantom limb pain
Phantom limb pain is severe pain referred to a body segment amputated by surgical procedure, whether because of disease (such as diabetic neuropathy), trauma or electric shock. It is estimated that this complaint affects 50% to 90% of amputees 77 and that only 5% to 10% of them complain of severe phantom limb pain 78 . The prevalence of phantom limb pain varies according to the characteristics of the population and to pre-, peri- and post-amputation anesthetic procedures 79 . The phantom limb pain phenomenon was described in the 16th century by Ambroise Paré, and its mechanism is still not clear. Since its description, several hypotheses have been proposed, ranging from peripheral causes, such as neuroma, increased peripheral axon excitability and trigger points, through spinal cord mechanisms, such as spinal cord reorganization after peripheral nerve injury, to CNS changes. Based on advances in diagnostic imaging technology, recent studies have shown reorganization of the primary somatosensory cortex after amputation, with these findings correlated with the magnitude of phantom limb pain 80 . This reorganization is due to maladaptive changes at different levels of the neuromatrix and may be associated with poor body representation in patients owing to the lack of afferent signals after limb or segment amputation 81 . Another curiosity is that, in addition to decreased gray matter in the motor cortex of amputees, there is increased gray matter in visual regions, suggesting the hypothesis of compensation of sensorimotor loss by visual adaptation mechanisms to maintain body function and integrity 82 . Therapeutic modalities for the management of phantom limb pain lack scientific evidence and are clinically classified as unsatisfactory. Patients self-evaluate their therapeutic experience and assign a success rate to treatments. Pharmacological approaches vary from 67% to 21%, for opioids and steroids, respectively. Interventional treatments vary from 58% for a subarachnoid opioid pump down to 20% for contralateral anesthesia. Among non-pharmacological options, relaxation is associated with 41% success, TENS with 28%, and hypnosis with the lowest success rate, 19% 77 . New therapeutic approaches based on neurophysiological concepts use discriminative sensory training 83 , virtual mental exercises 84 and mirror-image projection 85 , and renew the hope of patients and health professionals. Mirror therapy for phantom limb pain patients stresses the importance of establishing the illusion of the phantom limb in the mirror projection of the intact limb. Bilateral amputees cannot be submitted to this therapeutic option. The treatment effect depends on the virtual sensation of "having the amputated limb back" in the mirror projection. As patients look at the mirror, visualize their phantom segment and, by means of motor commands to both limbs, perform symmetric movements, noticing that the phantom limb "obeys" their commands, reconstruction of the body image becomes possible and, in some cases, partial pain reduction 85 . Therapy consists of developing the ability to make voluntary movements of the phantom limb, and several protocols are described: from light to complex movements, performed slowly or rapidly, with or without tactile stimuli associated with movement, and with or without supervision. Patients are instructed to stop the activity in case of adverse effects, such as dizziness, emotional discomfort from the visual sensation of the phantom limb, or an increase in pain intensity. Some patients have described cramps when "performing voluntary phantom limb movements" (confirmed by the mirror projection). Due to the risk of
worsening pain, some physiotherapists prefer to progress to voluntary movements only after a painless range of motion has been reached in the mirror projection of the phantom limb 86,87 . This phenomenon has also been described in CRPS patients 88 . Comparing the effect of mirror therapy and TENS, Tilak et al. 89 showed that both therapies induce a significant decrease in pain intensity, with no statistical or clinical difference between the two methods applied over four weeks. Mental image projection activates the sensory and motor cortices, and its regular practice could provide enough stimuli to reorganize cortical neurons and potentially modify phantom pain 84,90 . Thus, visualization and observation of movements are used with phantom segments, with or without meditation and relaxation. At the end of six weeks of weekly sessions of relaxation, body perception and imagined movements, MacIver et al. 84 observed a consistent decrease in pain intensity correlated with cortical reorganization on functional magnetic resonance imaging. Equivalent methods are used in patients with bilateral lower limb amputation. Tung et al. 91 compared the effects of observing and of mentally visualizing the movement of amputated segments. Patients who visually observed the movement had a decrease in pain intensity, as opposed to the other group, in which there was no difference. These are promising results because they stress the importance of motor-visual stimuli as facilitators of this cortical reorganization. The combination of therapeutic modalities comprising progressive muscle relaxation training, mental imagery and exercises for the amputated limb provided significant and clinically relevant improvement in pain intensity compared with the control group at the end of four weeks of twice-weekly sessions 92 .
Complex regional pain syndrome
Similar to the approaches used for phantom pain, mirror therapy, mental imagery and discriminative sensory training strategies are applied to patients with CRPS. Their effects are questionable and vary among studies, but are promising. In contrast to these innovative approaches to treating difficult-to-manage pain, clinical practice still uses passive mobilizations that induce excruciating pain, as well as contrast baths. Both techniques are questionable in light of the plasticity and metaplasticity mechanisms of nociceptive pain pathways. Spatial and temporal summation of pain is a risk factor for these mechanisms, making the clinical presentation even more complex and difficult to manage. Although questionable because of their neurophysiological effects, 70% of professionals report using this approach in their clinical practice 93 . Contrast baths are described as a therapeutic modality in which two baths, warm and cold, are alternated; they are classically used to treat the extremities because of the ease of immersing these segments 94 . Although described as alternatives for treating neuropathic pain, rheumatoid hands and diabetic feet, there is no scientific evidence supporting their clinical use. Hypothetically, their effects are based on the vasodilation and vasoconstriction provoked by alternating temperatures, with the goal of mimicking voluntary muscle contraction in order to decrease edema and stiffness and, as a consequence, pain. The risks of this modality are recognized in patients with loss or alteration of sensitivity, such as in diabetic neuropathy; however, this awareness of the risks does not seem to be widely applied in the clinical approach to CRPS patients. Few studies have evaluated primary motor cortex modulation with noninvasive brain stimulation techniques to treat CRPS. Pleger et al. 95 showed a transient effect during rTMS in this condition. Picarelli et al. 96 used high-frequency transcranial magnetic stimulation applied to this region in CRPS type I patients and showed a decrease in pain over a period of 10 consecutive sessions, with improvement in the affective components of pain. Peripheral stimulation with surface electrodes (TENS) seems to be more effective when associated with exercise 97 . However, physical treatments, including exercise, mental simulation of movements (motor imagery), mirror therapy, manual lymphatic drainage, sensory discrimination training, stellate ganglion block with low-intensity ultrasound and the use of pulsed electromagnetic fields, have not shown clinically significant effects in these patients 98 .
CONCLUSION
NP theories and mechanisms complement each other. Opting for just one hypothesis leads health professionals and researchers to lose potential avenues for reversing the clinical presentation or providing relief. There are several therapeutic options for treating central and peripheral neuropathic pain. Older approaches that retain a good level of evidence, such as TENS, deserve emphasis. Other old methods are no longer studied in research but persist in clinical practice, such as contrast baths. In addition to their questionable neurological and/or physiological effects, this modality poses a potential risk in cases of sensory deficit, that is, precisely in the NP population. As with chronic pain in general, there is a trend toward active approaches, those requiring patients' physical and mental effort, such as exercises, imagery, tactile discrimination and mirror therapy. Technological advances, such as rTMS and CCTS currents, are also gaining ground in the therapeutic approach to this population, although they need further study. Rehabilitation can and should be included as an adjuvant in the treatment of NP patients. It provides greater autonomy and functionality in these patients' daily lives, and in some cases these are the patients' motivational objectives, ranking above pain relief.
|
v3-fos-license
|
2019-04-05T00:11:35.445Z
|
2018-05-28T00:00:00.000
|
91184900
|
{
"extfieldsofstudy": [
"Chemistry",
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.mdpi.com/2073-4360/10/6/589/pdf",
"pdf_hash": "2ab80e4a76cd28d264b6c99d2a1a00ad40ad51c0",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46392",
"s2fieldsofstudy": [
"Biology",
"Materials Science",
"Chemistry"
],
"sha1": "2ab80e4a76cd28d264b6c99d2a1a00ad40ad51c0",
"year": 2018
}
|
pes2o/s2orc
|
Size-Controllable Enzymatic Synthesis of Short Hairpin RNA Nanoparticles by Controlling the Rate of RNA Polymerization
Thanks to the wide range of biological functions of RNA and to advancements in nanotechnology, RNA nanotechnology has developed in multiple directions for RNA-based therapeutics. In particular, among RNA engineering techniques, enzymatic self-assembly of RNA structures has gained great attention for its high packing density of RNA, low cost, and one-pot synthetic process. However, manipulation of the overall size of the particles, especially a reduction in size, has not been studied in depth. Here, we report the enzymatic self-assembly of short hairpin RNA particles for the downregulation of target genes, and a rational approach to the manipulation of the resultant particle size. This is the first report of the size-controllable enzymatic self-assembly of short hairpin RNA nanoparticles. While keeping all the benefits of an enzymatic approach, the overall size of the RNA particles was controlled on a scale of 2 μm to 100 nm, falling within the therapeutically applicable size range.
Introduction
RNA nanotechnology has developed enormously by taking advantage of the intrinsic properties of RNA and of advancements in nanotechnology [1][2][3]. The major advantages of RNA encompass its programmability and its various biological functions. The programmability of RNA originates from its simple molecular structure, composed of four monomers: adenine, uracil, guanine, and cytosine. Each monomer is incorporated during the enzymatic process of RNA polymerization, and the resulting polymeric strand of RNA can be hybridized in a sequence-specific manner by Watson-Crick base pairing [4,5]. In addition, the resulting strand of RNA can be folded into complicated higher-order structures through various interactions and related structural motifs, such as stem-loop structures, sticky ends, and loop-loop interactions [6,7]. Furthermore, RNA has a wide range of biological functions, and takes various forms depending on each one. Messenger RNAs, transfer RNAs, ribosomal RNAs, ribozymes, riboswitches, small interfering RNAs, microRNAs, and small nuclear RNAs are examples of various types of RNAs with distinct functions.
To fully exploit RNA, RNA nanotechnology has been developed in many ways. For the synthesis of functional RNA structures, a simple hybridization approach was used for the generation of a polyhedral structure, while a crossover technique and RNA architectonics were used for the generation of more complex RNA structures [8][9][10][11]. These RNA-based nanostructures were applied to a range of therapeutic applications, including the targeting of specific tumor cells in vivo, immunotherapy, and the delivery of chemotherapy drugs [2,12,13]. Among the range of RNA engineering approaches, the enzymatic replication and simultaneous self-assembly of RNA has gained great attention due to its potential of synthesizing various types of self-assembled structures for designated biological functions, ranging from nanometer-sized particles to centimeter-scale membranes [14][15][16]. An enzymatic approach has the benefit of achieving high packing density at a lower cost when compared with other approaches, and of enabling the one-step fabrication of artificial RNA-based structures of various sizes. Taking advantage of the benefits of building RNA structures through an enzymatic process, RNA-based structures have been widely used for therapeutic purposes such as non-viral protein expression, anti-proliferation of tumors, and treatment of choroidal neovascularization [15,17,18].
Here, we report the size-controllable assembly of short hairpin RNA (shRNA) particles via an enzymatic approach. While the synthesis of size-controllable RNA nanoparticles was previously possible using complementary rolling circle transcription with two types of circular DNA templates [19,20], this represents the first report of the size-controllable enzymatic self-assembly of shRNA nanoparticles from a single type of circular DNA template. While keeping all the benefits of an enzymatic approach, the overall size of the RNA particles was controlled on a scale of 2 µm to 100 nm, falling within the therapeutically applicable size range [21,22].
Circularization of Anti-GFP shRNA Encoding Single-Stranded DNA
A 92-base-long phosphorylated linear DNA for anti-GFP shRNA was mixed with a 22-base-long primer DNA for T7 RNA polymerase at a final concentration of 3.0 µM in nuclease-free water (the sequences of oligonucleotides are shown in Table 1). The mixture was heated for 2 min at 95 °C, and gradually cooled down to 25 °C over 1 h using a thermal cycler (T100™ Thermal Cycler, Bio-Rad, Hercules, CA, USA). T4 ligase (0.06 units/µL) was then introduced to the mixture with ligase buffer (30 mM Tris-HCl, 10 mM MgCl2, 10 mM DTT, and 1 mM ATP). The solution was incubated overnight at room temperature to ligate the nick in the circularized DNA.

Table 1. DNA sequences for synthesizing anti-GFP and negative control short hairpin RNA nanoparticles (shRNA-NPs). The complementary DNA sequence for the promoter region of T7 RNA polymerase is shown in blue, and the primer for T7 RNA polymerase binds to the blue region to form the promoter region of T7 RNA polymerase.
[Table 1 columns: DNA strands, length (nt), and sequence; rows include the linear DNA for anti-GFP shRNA-NPs. The sequences themselves are not reproduced here.]
Synthesis of shRNA Particles
The circular DNA, at a final concentration of 0.03 µM, 0.1 µM, or 0.3 µM, was mixed with 8 mM ribonucleotide solution mix, reaction buffer (80 mM Tris, 40 mM NaCl, 12 mM MgCl2, 4 mM spermidine, and 20 mM dithiothreitol; pH 7.8), and 5 units/µL T7 RNA polymerase. For the rolling circle transcription (RCT) reaction, the reaction solution was incubated for 20 h at 37 °C. The final reaction solution was briefly sonicated before the shRNA particles were purified with a Zeba™ Desalting Column, following the manufacturer's protocol. For the synthesis of cy5-labeled shRNA-NPs, cy5-UTP (final concentration of 20 µM) was added to the RCT reaction mixture at the beginning of the incubation process. To remove unincorporated cy5-UTP, the shRNA-NPs were purified with a Zeba™ Desalting Column after the RCT reaction.
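Assembling the reaction at these final concentrations is simple C1V1 = C2V2 dilution arithmetic; the sketch below illustrates it. The 3.0 µM circular DNA stock and the 50 µL reaction volume are assumptions chosen for illustration, not values reported in this protocol.

```python
# Minimal sketch of the dilution arithmetic (C1 * V1 = C2 * V2) behind assembling
# an RCT reaction at the final circular-DNA concentrations used in this work.
# The stock concentration and the 50 uL reaction volume are illustrative assumptions.

REACTION_VOLUME_UL = 50.0   # assumed total reaction volume (uL)
DNA_STOCK_UM = 3.0          # assumed circular DNA stock concentration (uM)

def stock_volume(final_conc, stock_conc, total_volume):
    """Volume of stock needed so that final_conc = stock_conc * v / total_volume."""
    return final_conc * total_volume / stock_conc

for final_um in (0.03, 0.1, 0.3):  # final circular DNA concentrations from the protocol
    v = stock_volume(final_um, DNA_STOCK_UM, REACTION_VOLUME_UL)
    print(f"{final_um:>4} uM final -> {v:5.2f} uL of {DNA_STOCK_UM} uM stock "
          f"in a {REACTION_VOLUME_UL:.0f} uL reaction")
```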
Characterization
A field emission scanning electron microscope (FE-SEM) (S-5000H, Hitachi, Tokyo, Japan) and an atomic force microscope (AFM) (Park NX10, Park Systems, Suwon, South Korea) were used to obtain high-resolution digital images of the shRNA particles. The shRNA particles for SEM imaging were deposited onto a silicon wafer, and coated with Pt after being dried. For AFM imaging, 10 µL of the reaction mixture was diluted in nuclease-free water containing 5 mM Tris-HCl and 5 mM MgCl2. After incubating the mixture at 4 °C for 30 min, 50 µL of the mixture was deposited onto a freshly cleaved mica surface, and further incubated at 4 °C for 30 min. Following the incubation, the mica surface was rinsed with deionized water to remove salts, and nitrogen gas was then sprayed onto the surface for three to five seconds to remove the remaining solution. The samples were scanned in non-contact mode with NC-NCH tips (Park Systems). Nanoparticle tracking analysis (NTA) was carried out with a NanoSight NS300 (Malvern, Worcestershire, UK). Transmission electron microscopy (TEM) (JEM-2100F, JEOL, Tokyo, Japan) was employed to characterize the shRNA-NPs, operating at an accelerating voltage of 200 kV, before TEM-based energy-dispersive X-ray (EDX) analysis was used to determine the chemical composition of the shRNA-NPs. For the preparation of samples, the shRNA-NPs were deposited onto Lacey Formvar/carbon-coated copper grids, and then air-dried at room temperature.
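For context, nanoparticle tracking analysis estimates particle size by tracking the Brownian motion of individual particles and converting the measured diffusion coefficient into a hydrodynamic diameter via the Stokes-Einstein relation. The minimal sketch below shows that conversion; the diffusion coefficient, temperature and viscosity values are illustrative assumptions, not measurements from this study.

```python
# Minimal sketch of how NTA-style measurements convert a diffusion coefficient
# into a hydrodynamic diameter via the Stokes-Einstein relation:
#   D = k_B * T / (3 * pi * eta * d_h)  =>  d_h = k_B * T / (3 * pi * eta * D)
# The numbers below are illustrative assumptions, not values from this study.

import math

K_B = 1.380649e-23   # Boltzmann constant (J/K)
T = 298.15           # assumed temperature (K), ~25 degC
ETA = 0.89e-3        # assumed viscosity of water at 25 degC (Pa*s)

def hydrodynamic_diameter(diffusion_coefficient_m2_s):
    """Hydrodynamic diameter (m) of a sphere from its diffusion coefficient."""
    return K_B * T / (3.0 * math.pi * ETA * diffusion_coefficient_m2_s)

# A particle diffusing at ~4.9e-12 m^2/s corresponds to roughly 100 nm:
d_h = hydrodynamic_diameter(4.9e-12)
print(f"hydrodynamic diameter ~ {d_h * 1e9:.0f} nm")
```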
Intracellular Uptake Analysis
HeLa cells were grown in DMEM, supplemented with 10% FBS, 100 units/mL penicillin, 100 µg/mL streptomycin, and 1% Antibiotic-Antimycotic, at 37 °C in a humidified atmosphere supplemented with 5% CO2. The cells were passaged routinely to maintain exponential growth. One day prior to transfection (~90% confluence), the cells were trypsinized, diluted with fresh medium, and transferred to 24-well plates (50,000 cells per well). The cy5-labeled shRNA-NPs were covered with the delivery carrier, a Stemfect™ RNA Transfection Kit, following the manufacturer's instructions. Specifically, the cy5-labeled shRNA-NPs were mixed with the Stemfect™ RNA Transfection reagent (RNA:reagent = 1:3 w/v) in PBS solution, and incubated for 10 min at room temperature. After diluting the shRNA-NPs/reagent solution with media, the cells were treated with the samples for 4 h at a concentration of 2.5 µg/mL at 37 °C. After further incubation for 12 h in fresh serum-containing media, the cells were detached from the plates by treatment with trypsin-EDTA solution, and washed three times with PBS. The cells were analyzed by NucleoCounter (NC-3000, Chemometec, Allerod, Denmark). The data were analyzed using the FlowJo software.
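The per-well amounts follow directly from the 2.5 µg/mL dose and the 1:3 RNA:reagent ratio. A minimal sketch is shown below; the 0.5 mL working volume per well of a 24-well plate, and the reading of "1:3 w/v" as 3 µL of reagent per 1 µg of RNA, are assumptions made for illustration rather than details stated in the protocol.

```python
# Minimal sketch of the per-well dosing arithmetic for the uptake experiment.
# Assumptions: 0.5 mL working volume per well of a 24-well plate, and "1:3 w/v"
# interpreted as 3 uL of reagent per 1 ug of RNA. The 2.5 ug/mL dose and the
# 1:3 RNA:reagent ratio come from the text.

WELL_VOLUME_ML = 0.5          # assumed medium volume per well (mL)
DOSE_UG_PER_ML = 2.5          # shRNA-NP dose from the protocol (ug/mL)
REAGENT_UL_PER_UG_RNA = 3.0   # assumed reagent volume per ug RNA (1:3 w/v)

rna_per_well_ug = DOSE_UG_PER_ML * WELL_VOLUME_ML
reagent_per_well_ul = rna_per_well_ug * REAGENT_UL_PER_UG_RNA

print(f"RNA per well:     {rna_per_well_ug:.2f} ug")
print(f"Reagent per well: {reagent_per_well_ul:.2f} uL")
```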
In Vitro Gene Knockdown Analysis
HeLa-GFP cells were transferred to 96-well plates (7000 cells per well). The shRNA-NPs were covered with the transfection reagent from the Stemfect™ RNA Transfection Kit, prior to transfection according to the manufacturer's instructions. Then, the cells were treated with various concentrations of the covered shRNA-NPs, ranging from 0.1 to 2.5 µg/mL. After 24 h of treatment, cells were washed with DPBS, and lysed with CelLytic M. The green fluorescence from each well containing the lysed cells was detected by a microplate reader (Synergy HT, BioTek, Winooski, VT, USA), and then normalized with the green fluorescence from the well containing untreated HeLa-GFP cell lysates, to obtain relative GFP expression. Cell viabilities were assessed with CCK-8 according to the manufacturer's instructions.
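The relative GFP expression described here is a ratio of each well's fluorescence to that of the untreated control. A minimal sketch of the calculation is below; the fluorescence readings are made-up numbers, and the optional blank subtraction is an assumption, since the text only describes normalization to the untreated control.

```python
# Minimal sketch of the relative GFP expression calculation described above.
# Fluorescence readings are made-up numbers; the blank subtraction is an
# assumption (the text only describes normalization to the untreated control).

def relative_gfp(sample_rfu, untreated_rfu, blank_rfu=0.0):
    """Relative GFP expression = (sample - blank) / (untreated control - blank)."""
    return (sample_rfu - blank_rfu) / (untreated_rfu - blank_rfu)

untreated = 12000.0                      # untreated HeLa-GFP lysate (RFU)
treated = {0.1: 11100.0, 0.5: 9300.0,    # hypothetical readings per dose (ug/mL)
           1.0: 7800.0, 2.5: 6000.0}

for dose, rfu in treated.items():
    print(f"{dose:>4} ug/mL -> relative GFP expression = {relative_gfp(rfu, untreated):.2f}")
```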
Statistical Analysis
Data in this study were represented as mean values of independent measurements (n = 4). Error bars indicated the standard deviations of each experiment. Statistical analysis was performed with a Student's t-test. Statistical significance was assigned for p < 0.05 (95% confidence level).
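A minimal sketch of the comparison described above (means of n = 4 independent measurements, two-sample Student's t-test, significance at p < 0.05) is given below; the measurement values are hypothetical.

```python
# Minimal sketch of the statistical comparison described above: mean +/- SD of
# n = 4 independent measurements and a two-sample Student's t-test (alpha = 0.05).
# The measurement values are hypothetical, for illustration only.

import numpy as np
from scipy import stats

control = np.array([1.00, 0.97, 1.04, 0.99])   # e.g., relative GFP, untreated (n = 4)
treated = np.array([0.52, 0.48, 0.55, 0.45])   # e.g., relative GFP, anti-GFP shRNA-NPs

print(f"control: mean = {control.mean():.2f}, SD = {control.std(ddof=1):.2f}")
print(f"treated: mean = {treated.mean():.2f}, SD = {treated.std(ddof=1):.2f}")

t_stat, p_value = stats.ttest_ind(control, treated)  # Student's t-test (equal variances)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}, significant = {p_value < 0.05}")
```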
Results and Discussion
For the synthesis of shRNA particles, circularized template DNA was first prepared as described in previous reports [23,24]. Then, to synthesize size-controlled shRNA particles via rolling circle transcription, the circular DNA, at various concentrations ranging from 0.03 to 3.0 µM, was mixed with T7 RNA polymerase and the other reaction components for polymerizing RNA strands (Figure 1). In previous studies, we reported that the concentration ratio of circular DNA to polymerase played an important role in controlling the size of RNA nanoparticles [19,20]. Those studies used complementary rolling circle transcription, which involves two types of circular DNA that are complementary to each other. Here, we report that the same logic can be applied to controlling the size of shRNA particles via rolling circle transcription involving only one type of circular DNA. Downsizing of the RNA nanoparticles was achieved not only by increasing the concentration of polymerase, but also by decreasing the concentration of circular DNA in the RCT reaction. This was consistent with previous findings that the ratio of circular DNA to polymerase was the main factor in manipulating the sizes of the final self-assembled products. Interestingly, however, manipulating the concentration ratio of circular DNA to polymerase by increasing the concentration of RNA polymerase with one type of circular DNA did not reduce the size of the particles (Figure S1). To better understand the dependence of the synthetic process and its resulting products on the concentration of the circularized DNA template in the RCT reaction, RNA amplification under different synthetic conditions was observed in real time with RT-PCR. For the initial four hours of the RCT reaction, increasing the concentration of the circularized DNA template resulted in a higher number of RNA strands synthesized by T7 RNA polymerase (Figure 2A). Interestingly, the changes in fluorescence intensity were directly proportional to the concentration of circular DNA, even though the concentrations of the monomers and the enzymes were kept the same. This indicated that the amount of RNA generated by the RCT reaction could be controlled by changing the concentration of circular DNA. This is logical when considering that polymerases are likely to work on fully constructed circular DNAs, rather than those already being used by the polymerase to transcribe RNA strands. Correspondingly, the RCT products from various concentrations of circular DNA at 4 h of reaction were closely examined by atomic force microscopy (AFM, Figure 2B). Consistent with the real-time PCR result, the amount of synthesized RNA was greater at higher concentrations of template DNA. In addition, the level of entanglement of the RNA strands was also higher at higher concentrations of template DNA.
In order to test the therapeutic efficacy of the shRNA particles, we chose shRNA nanoparticles (shRNA-NPs) synthesized with 0.03 µM of circular DNA for further characterization. The self-assembled shRNA-NPs had spherical structures that were 100 nm in diameter, as revealed by scanning electron microscopy (SEM, Figure 1B) and nanoparticle tracking analysis (NTA, Figure 2C). The NTA results also indicated a narrow size distribution, suggesting that the shRNA-NPs had a size favorable for cellular internalization, so that the shRNAs could be released from the nanoparticles for the regulation of target genes. Transmission electron microscopy (TEM) images revealed that the nanoparticles were homogeneous from their core to their outermost region, indicating that shRNAs were present throughout the particles (Figure 2D). The chemical composition of the nanoparticles included phosphorus and nitrogen, which indicated the presence of the phosphate backbones and nucleobases of nucleic acids in the structure (Figure 2E). Each of the atomic contents was evenly distributed according to TEM-based mapping, showing a uniform distribution of nucleic acids in the nanoparticles. While images taken by SEM and TEM provide only two-dimensional information about the nanoparticles, the height of the nanoparticles was measured with atomic force microscopy, which is known for its excellent z-axial resolution, so as to determine the full three-dimensional structure (Figure 2F). The overall height of the nanoparticles was about 100 nm, which, together with the previous data on the nanoparticles, supports the shRNA-NPs having fully spherical structures (Figure 2G). This was further supported by the NTA, which tracked individual nanoparticles exhibiting Brownian motion (Figure 2C, Supplementary Video S1). The hydrodynamic diameter of the nanoparticles was also measured to be approximately 100 nm, which indicated that the nanoparticles stayed compact even in hydrated conditions.
To readily evaluate cellular internalization by cytometry, the shRNA-NPs were enzymatically labeled with a fluorescence-emitting modified nucleotide, cyanine 5-UTP (cy5-UTP). By introducing cy5-UTP into the RCT reaction, T7 RNA polymerase incorporated these molecules into the synthesized RNA strands, so that the resulting self-assembled shRNA-NPs emitted red fluorescence and showed an increased red fluorescence signal when analyzed by image cytometry (Figure 3A). Accordingly, cytometry analysis was also carried out for HeLa cells treated with the cy5-labeled shRNA-NPs to evaluate cellular internalization. As shown in Figure 3B, there was a 6-fold increase in the number of cells showing a strong cy5 signal, which indicated successful cellular uptake of the shRNA-NPs.
To test the gene-silencing activities of the shRNA-NPs, HeLa cells stably expressing GFP (HeLa-GFP) were treated with anti-GFP shRNA-NPs. GFP expression levels went down by 50% when the cells were treated with 2.5 µg RNA per mL (Figure 3C), while treatment with the shRNA-NPs did not cause any significant cytotoxic effects (Figure 3D). Furthermore, non-targeting shRNA-NPs had negligible effects on the level of GFP expression, which demonstrated a target-specific gene-regulation effect of the shRNA-NPs without adverse side effects.
Conclusions
In summary, we developed the size-controllable synthesis of short hairpin RNA nanoparticles via an enzymatic approach. By controlling the concentration of the template circular DNA while maintaining the same concentration of RNA polymerase in the RCT reaction, the size of the shRNA particles could be reduced from 2 µm to 100 nm. The resulting shRNA-NPs were fully characterized, and their size could be controlled by tuning the level of entanglement of the RNA strands through the concentration of template DNA in the RCT reaction. In addition, the shRNA-NPs had a favorable size for therapeutic applications. Through such developments in RNA engineering, we envision a step forward toward real-world applications of RNA therapeutics.
|
v3-fos-license
|
2023-02-15T16:09:56.866Z
|
2023-02-13T00:00:00.000
|
256865391
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": null,
"oa_url": null,
"pdf_hash": "d29c7ba3050a18f6adfb6165b48ec1bba0a76e45",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46393",
"s2fieldsofstudy": [
"Sociology",
"Geography"
],
"sha1": "9797285f898231768445e571764dc268fb56795f",
"year": 2023
}
|
pes2o/s2orc
|
The Experience of International Students: Biographical Narratives and Identities
This article presents the findings of a qualitative and comparative study on the cultural experience of international students in North and South Europe. I employ a narrative approach and the focus of the research revolves around the autoethnographies of 25 international students in Helsinki and 25 in Florence. The narratives were prompted by in-depth interviews following a template divided into the three phases of travel conceived as a rite of passage: departure–preliminal, transition–liminal, arrival–postliminal. To explore the meaning of geographical mobility in the lives of these young people, I sketched a series of self-identity types connected to mobility experiences: the Fated, whose biographical premises are all pushing-pulling toward the status of international student; the Academic, who is fascinated by the idea of becoming a worldly intellectual and sees the PhD as a natural step; the Globetrotter, whose mobility is an end in itself: the goal is the next city-country; the Explorer, who is abroad looking for new cultural challenges, with a genuine desire to discover and understand specific places and people; the Runaway, who feels like a stranger at home and is escaping abroad for political or existential reasons. I believe that the interpretation of international students’ sense of self-identity can be fruitfully achieved through the narrative path I have constructed (or a similar one).
Introduction
Over recent decades, studying abroad has increased vastly to become an institutionalized practice. International student mobility has expanded constantly over the past 20 years. In 2019, 6.1 million tertiary students worldwide had traversed a national border, more than doubling the 2007 figure (OECD 2021, 215). Years 2020-2021 will probably represent a watershed due to the coronavirus pandemic. It remains to be seen whether this is a turning point toward the decline of international student mobility or a temporary pause. In this study, the key question is not a quantitative one. It is not about the "how much" of the trend, but about "what" and "who." Here I explore what will decline or resurge, who these international students are in biographical and narrative terms, and how the mobility experience impacts the way they imagine their future lives (Cuzzocrea and Mandich 2016).
Transnational and global higher education mobility is an impressive social and cultural phenomenon that has been, and still is, accompanied by two cultural knowledge gaps influencing each other: the scarcity of popular public narratives (such as books, movies, or documentaries) and of academic narrative-biographical studies. Because of this double and interdependent cultural void, I believe there is further scope for interpreting this study area in depth.
Secondary analysis of non-scholarly material revealed that it is very hard to find a book or a movie that represents the individual and collective meaning of traveling, living, and studying in another country. Here I mean a complete story, a narrative that introduces the protagonists at home in their familiar surroundings, portraying the sociocultural background along with the trigger factors leading to the decision to go elsewhere, and then the experience abroad, and how it affects the students' life path and self-identity. To the best of my knowledge, the only story representing the "Euro-Cosmopolitan" student is still the 2002 movie L'auberge Espagnole.
Even within the academic domain, beyond quantitative socio-demographic data there is little qualitative empirical material for a deeper understanding of students' overall experience from an authentically narrative-biographical and comparative slant. 1 Moreover, existing studies 2 tend to suffer from separation into distinct disciplines (sociology, social and cultural anthropology, communication studies, education, social psychology, cultural studies) and thematic fields (youth, human development, mobility studies, cultural globalization and cosmopolitanism, education). In addition, in most studies, the protagonist (the international student) is conceived and represented as a mere "agent": we know very little-and sometimes nothing-of his/her biographical past. To find out what young people are really getting out of higher education mobility, we need to hear their stories and explore the implications of the educational travel within the broader context of their lives: past, present, and future.
This qualitative and comparative research addresses, among others, the question: Is studying abroad fostering cultural openness through real opportunities to meet the Other in the flesh or does it support a more aesthetic "touristic gaze" (Urry and Larsen 2011)? In other words, can international students' narratives disclose other core meanings beyond the implicit/explicit, instrumental-expressive significance of cultural self-empowerment abroad (Papatsiba 2005) to meet the challenges of a globalized world? I believe so.
This research led me to formulate another core, and potentially foundational, meaning for the studying-abroad experience that I conceptualize as existential. If the possibility of imagining oneself "elsewhere" is a fruit of late modernity and cultural globalization, an imaginative consequence for the construction of individual identity is a sort of quest for "one's place in the world": a personal promised land. If "developing a cosmopolitan identity is at the core of discourses on educational travels" (Huang 2021, 4), this study revealed how studying abroad can be considered a dual transitional passage toward adulthood and global citizenship (Birindelli 2018). In short: a training camp to become cosmopolitan (Hannerz 1990, 2005), 3 albeit without exactly knowing what "cosmopolitan student" means. The objective of the research project The Cultural Experience of International Students is to interpret the biographical meanings attributed by a group of 50 international students to their educational, cultural, and overall life experience abroad, in Finland (North Europe) and Italy (South Europe). The study employs a cultural and narrative-biographical approach developed over the years (Birindelli 2014, 2022), and its overall purpose is to reconstruct students' narrative self-identity "at home" in their past, during their stay abroad (present), and in their attempt to imagine themselves either in the host country, back at home, or elsewhere (future). Hence, in this study I encouraged and collected partial autobiographies-autoethnographies-autoethnography being the description of self as seen within another culture (Ellis and Bochner 2000). The collected stories have an authentic narrative and biographical structure: incipit-ruit-exit; past-present-future. To the best of my knowledge, this kind of systematic study has never been carried out for international students. 1 Rare examples of qualitative research of this kind on international students are the following: Murphy-Lejeune (2003); Papatsiba (2005); Brooks and Waters (2010); Carlson (2011); Krzaklewska (2013); Cuzzocrea and Mandich (2016). Murphy-Lejeune's 2003 study and approach is probably that which has most affinities with mine. However, her research consists of 60 interviews featuring precise questions exploring the researcher's legitimate theory of the "stranger," forms of adaptation to a new culture, and culture shock. My narrative template is simpler and does not influence or guide students' self-narratives. I believe it can be more easily adapted by other researchers engaged in the field of international student mobility, studying abroad, or youth studies in general. From a conceptual perspective, and especially in interpreting the studying abroad experience within the lifecycle framework, this study has most affinities with those of Cuzzocrea and Cairns (2020) and Cuzzocrea and Krzaklewska (2022): I have discussed this article with these authors and I thank them in the Acknowledgments. Regarding the autoethnographic-autobiographic method, in most existing literature this tends to be the researcher's personal account of the experience of living and working in contexts other than the country of origin (Daskalaki 2012; Daskalaki et al. 2016). An example of this method applied to studying abroad can be found in Nilemar and Brown (2019) where, adopting an autoethnographic approach, the author offers a first-person account of the past experience of being an international student in various countries.
To the best of my knowledge, in the available literature there is no study with 50 autoethnographies (or autobiographical narratives) written by international students from all inhabited continents. 2 On International Student Mobility (ISM) studies, see among others the following: Byram and Feng (Eds. 2006); Feyen and Krzaklewska (Eds. 2013); Dervin and Regis (Eds. 2015); Van Mol (2014); Cairns (2014); Cairns et al. (2018). On the biographical approach to youth transition, see Henderson et al. (2006). 3 A clear and foundational overview of the cosmopolitan scholarly debate can be found in the introduction to the book edited by Vertovec and Cohen (2002). Besides Hannerz, leading scholars who reactivated the debate on cosmopolitanism in the sociological and anthropological fields are, among others, Beck (2006) and Appiah (2006), respectively. For educational, philosophical, and historical perspectives, see Papastephanou (Ed. 2016). For a discussion of the cosmopolitan bildung of young people, see Cicchelli (2012), and for the cosmopolitan habitus of international students, see Igarashi and Saito (2014). Regarding cosmopolitanism as an empirical field of research in the social sciences, see Kendall et al. (2009). An extensive overview of the multi- and interdisciplinary cosmopolitan debate can be found in Delanty and Inglis (Eds. 2011), Delanty (2009), Skrbis and Woodward (2013), and Cicchelli and Mesure (2020). In this research, "identity" is understood as "narrative identity." Narrative identity (Ricoeur 1984, 1985, 1988; Bruner 1990; Burke and Stets 2009) is always a retrospective interpretation of the past and an anticipation of the future. Identity is a process made up of the relations that the individual-along with the intersubjective inside-outside group recognition-establishes, through memory, between the different and shifting perceptions of oneself in relation to the Other, and to the wider sense of belonging to a (national, regional, transnational, global) collective identity (Birindelli 2022, 14).
Developing a Novel Method
Starting from September 2016, I found and recruited 25 international students at the University of Helsinki (Finland, representing Northern Europe) and 25 at the University of Florence (Italy, representing Southern Europe). 4 I was able to contact the participants of the research in Finland with the support of supervisor Keijo Rahkonen, at the time head of the Department of Social Sciences and Vice-Dean for international affairs. Thanks to his help, I was also able to contact international students' associations. 5 In Italy, this part of the research process was more difficult because students' associations-and student life in general-do not exist, for either international or local students: 6 another finding that will be interesting to analyze in the future. In Florence, I was supported by the pro-rectors of the University of Florence at the time, Marco Bindi and Giorgia Giovannetti, and by other directors of International Master Programs, especially Valeria Fargion. Although Florence can be considered an international student city, I struggled to find participants for the research. In order to preserve a comparability criterion, they had to be international master students in a public university. In Florence, the majority of international students are undergraduates-European Erasmus students, US abroad programs (Birindelli 2020)-or PhD students, for instance enrolled at the European University Institute.
Regarding the sampling method, qualitative inquiry typically focuses in depth on relatively small, purposively selected samples. Unlike random sampling, with logic derived from statistical probability theory, the logic and power of purposive sampling lies in selecting information-rich cases to study in depth, from which one can learn a great deal about issues of central importance to the inquiry (Denzin and Lincoln 2000). I might add that my research project gleaned knowledge from individuals with particular expertise. Expert sampling is particularly useful where there is a lack of empirical evidence in an area, which is the case for my investigation.
I carried out the fieldwork during the academic year 2016-2017, and in two follow-up phases in 2020 and 2021, when I started to share interpretations in a Facebook group discussion. Thus, I studied the international students over a 5-year timespan. I am unaware of any similar longitudinal study ever having been carried out for international students, making this a pioneering contribution in the field.
At the start of the study, after some informal conversations with undergraduate students, I decided that the participants in the research would be Master students. I saw the Master students as the "older brothers and sisters" of the younger undergraduates I had already researched in the past; being older, they would be able to reconstruct their stories with a greater degree of reflexivity. Also, I imagined that I might meet young people who had previous exchange experience and, so to speak, insisted on going abroad. This anticipation was correct and became a significant dimension of the study. In the attempt to come up with narrative identity types, I first created the Veteran, a student who went through at least three levels of studying-abroad experiences: high school, undergraduate, and master. I later dropped this type because almost all the international students I met in this study can be considered "veterans." Overall, I was able to achieve a balance in terms of age (the average was 26 at the time of the final draft of the autoethnography) and gender, and to get students from all inhabited continents involved, while for the area of study/ discipline I had to settle for the available International Masters in the two universities. In this study, I chose to research students from different nations instead of focusing and comparing just a few nationalities-such as in the quantitative study (survey) of Finn et al. (2022). This is because I was interested in exploring the international students' sense of belonging to a "cosmopolitan group" and concentrating on similarities-differences of their experience abroad grounded in their biographical narratives rather than national or regional culture.
The phases of the fieldwork were broken down as follows.
1) In-depth interviews (approximately one and a half hours)
- These were based on a narrative template.
- The full transcription of the interview, with my preliminary interpretation of some key points and/or questions, was given to the participant, who revised, integrated, changed, and deleted at will. This transcription prompted the autobiographic-autoethnographic reflection.
2) Autoethnography (average of 15 pages, single-spaced)
- The autoethnography was based on the same template as the interview. The participant was also free to develop other topics and/or to decide to develop certain themes of the template in greater or lesser depth, since it was his/her story.
The overall design of the research project was constructed to foster participants' involvement as active subjects rather than passive objects. The possibility of reading transcripts and preliminary interpretations in every single phase made the participants feel that they were not only actors, but also to a degree scriptwriters and directors of the research process, thus laying the foundations for a cooperative enterprise based on trust.
Liminality and the Narrative Template: a Heuristic Conceptual Overlay
The life stories of young people constitute the backbone of this research itinerary. All biographical accounts and other research steps were guided by a narrative template. The template is divided into sections addressing the three basic phases of travel (departure-transition-arrival) creating a heuristic overlay with the "three phase architecture" at the heart of the study: narrative structure (incipit-ruit-exit); existential time (past-present-future); rites of passage (preliminal-liminal-postliminal); human development (young-young adult-adult); sense of belonging to a collectivity (national-European-cosmopolitan). This overlapping (Table 1) constitutes a novelty both in the method and in the theoretical construction of the research itinerary.
While conceptual dimensions such as phases of travel, existential time, and narrative structure do not require further explanation, the inextricable connection between rites of passage (liminality), human development transition, and sense of belonging to a collectivity calls for clarification.
Young people today face a dual human development transitional passage: (1) in the dimension of individual and generational identity (youth-adulthood); (2) in the sphere of collective identity and sense of belonging (national-global). For those who were born (or moved to live) in the old continent, it becomes a triple liminal phase: becoming adults, citizens of a globalized world, and Europeans (Birindelli 2018).
As regards the sense of belonging to a collectivity, the traditional human development transition from youth to adulthood also needs to be conceived in a transnational and global manner: space and "spatial reflexivity" (Cairns 2014; Cairns et al. 2018) need to be incorporated into the study of young people. Yet this interpretative approach should not simplistically dismiss the role of the country-state, in both its concrete structural impact on people's lives-all passports worldwide, for instance, are still nation-based-and its cultural reverberations: the country of origin, although redefined, remains a powerful source of symbolic meanings molding collective and individual identities. The outside and "inside-out" narrative of a collective identity is as important as the cross-boundary one (Birindelli 2019). We can in fact transcend a boundary only by recognizing its existence. Additionally, by analyzing international students' narratives, I have realized how the key criterion defining their international status is precisely being inter-national(s). The cosmopolitan game simply ends if you take away the reference to the country of origin-tricky, isn't it? There can be no universality without particularity and vice versa: "Cosmopolitanism, in short, is empty without its cosmos" (Harvey 2000, 554).
Regarding the liminal dimension, analysis reveals that studying abroad can constitute a rite of passage (Van Gennep 1909/1960): a liminal and transitional space-time toward adulthood, Europeanism, and/or cosmopolitanism (Birindelli 2018). Indeed, when international students leave "home" (their comfort zone, usual living area, educational environment, etc.) and travel to a new place, they must adapt to a new ecological system with its social and cultural scenery. It is crucial to emphasize the liminal dimension (Turner 1969, 1977) for several reasons. The institutional endorsement given by global academia, family, friends, etc. (society at large we might say)-channeled mainly through internet and social media-constructs a framework of codified practices, procedures, and symbolic meanings for studying abroad. The life transition toward what we can call "cosmopolitan adulthood" 7 takes place within the travel path designed by the departure from the homeland, the transition to the host land, and the definitive or temporary return home. The liminal phase clearly takes place during the sojourn abroad, and the cosmopolitan status is achieved only after a series of highly codified steps, such as passing exams and earning the degree. When the young person (on the brink of adulthood in our case, thus young adult) completes this ritual-cultural path, family, friends, and the academic community (both at home and abroad) should recognize the new sociocultural status of international student. Analysis of the collected autoethnographies tells us that this is not happening. The international student's postliminal stage remains unclear and consequently, since it is a narrative, even the preliminal and liminal stages become hazy.
In his essays, Victor Turner introduces an interesting term, liminoid: the "successor of the liminal in complex large-scale societies, where individuality and optation in art have in theory supplanted collective and obligatory ritual performances" (Turner 1987, 29). Therefore, liminoid manifestations can challenge the broader social structure, a kind of cultural critique of the status quo. Can the studying-abroad experience be equated with Turner's "liminoid"? I doubt it. I have found no evidence that studying abroad challenges the status quo: quite the opposite.
We can instead preserve the quasi-liminal meaning of liminoid experiences, in the sense that they are optional and do not lead to the resolution of a personal crisis or a change in status. Liminal events are ritual forms of cultural performance and involve society as a whole, whereas liminoid experiences are essentially transitional and the individual can choose to participate in or ignore them (Turner 1974). Furthermore, we should not lose sight of how the studying-abroad season is nested in the life passage from youth to adulthood. And however prolonged, fragmented, culturally diverse, global, etc., it remains an adulthood realized through a job and other concrete conquests. At the end of the 2002 movie L'auberge Espagnole, although the protagonist Xavier has, through his father, the chance of a good job in a ministry, he instead decides to pursue his childhood dream and become a writer. His first book is, of course, about himself and his postgraduate Erasmus experience in Barcelona. Through this narrative escamotage, the script unintentionally suggests that the protagonist gets stuck in his liminal time. The narratives of studying abroad suggest something akin to what Szakolczai (2017) called "permanent liminality." Under static, petrified conditions "change in the sense of creativity, innovation, and adventure, in a word, 'liminality,' is most welcome," whereas permanent liminality, to use a Foucauldian term, is intolerable: "It generates a sense of stasis, meaninglessness; the more things change, the more they stay the same" (Szakolczai 2017, 244).
The Lack of Public Narratives: Scripts Without a Story
Interpretation of the international students' autoethnographies-the core of this study-was supplemented with secondary sources, adding layers of information, and using one type of data to validate or refine others (data triangulation). Analysis of all the qualitative empirical material took place within a broad framework of scholarly and nonscholarly inter-and multidisciplinary sources focusing the themes directly or indirectly related to the research. Therefore, along with the fieldwork, I collected and/or analyzed quantitative and qualitative data on young people studying abroad; scholarly articles, essays, and monographs; and nonscholarly material (books, movies, social media and travel blogs, documentaries, advertisements, music, tourist guides, audio-visual data, news media, documents, archives, etc.).
Within the departure-preliminal section of the in-depth interview, I also asked the participants about the kind of images and stories they had of the host city/country and the experience of studying abroad in general, and about the media sources of such representations, specifying that they could be anything. Here I was essentially trying to reconstruct international students' imaginary of the host city-country and of the studying-abroad experience by searching for "cultural objects" (Griswold 1994) that might have shaped their expectations. Subsequently, I concentrated my analysis of the collected autoethnographies on movies because of their power to shape a narrative and mold evocative representation of the overall experience abroad-extensive interpretation of this section can be found in Birindelli (2021).
The only movie mentioned (twice) representing the story of an international student was L'auberge Espagnole-Pot Luck or The Spanish Apartment in English. 8 I then conducted a secondary analysis of movies at the national (Finnish, Italian) and international levels, concentrating on three key criteria: (1) portrayal of the story of the student-protagonist with a slight "coming of age" narrative approach; (2) pertaining at least to the comedy-drama-romance genre; here I discarded movies such as Lizzie McGuire's and all "romcom" (romantic comedy) titles; (3) a high level of international diffusion (10 or more national markets reached) and a European city as the movie setting. The analysis revealed only one possibility: The Spanish Apartment.
The interpretation of the collected autoethnographies and of secondary scholarly and non-scholarly sources connected with studying abroad reveals the absence of a clear-cut narrative of what it means to be an international student. We can find a series of related images, but these are not sufficient to constitute a leading narrative for students' life experiences in North Europe, South Europe, and Europe in general or elsewhere. It is possible to glimpse a vague cosmopolitan narrative, constructed on a global scale by different actors and institutions, upholding the generic validity of studying abroad for both instrumental and expressive reasons, and as an institutionalized rite of passage toward adulthood and global citizenship. However, it remains unclear what is the prize, the elixir, the treasure, or the lesson (Campbell 1968; Propp 1928/1968) to be gained from the special "studying-abroad world" and what the young person is going to do with it in adult life. Consequently, the liminal state of studying abroad can be reconceptualized as either "liminoid" or "limbic." And this ritual interpretation is consistent with young people's endlessly prolonged mobile trajectories, at times leading nowhere from an economic and socio-demographic viewpoint (Cuzzocrea and Cairns 2020).
My analysis leads me to interpret the (quasi-) ritual of studying abroad as a script without a story, i.e., a structured story, such as a book or a movie. If this is the case, the implicit and preconscious meaning of studying abroad grows enormously. The young person's self-story lacks a center of narrative gravity, leaving the student-protagonist alone both in acting and telling his/her epic. As a result, the overall myth-ritual is sabotaged, and even the recognition of the new cosmopolitan status by the community (institutions, family, peers, etc.) becomes blurry. It is one thing to enact on the basis of a story, and another to enact in the absence of a story.
As stated at the beginning of this article, in 2019, 6.1 million tertiary students worldwide had traversed a national border. However, this vast social and cultural process gave birth to very few popular narratives apart from L'auberge Espagnole, where the protagonist Xavier, a 24-something French Erasmus postgraduate in Barcelona, shares the apartment with other students from England, Belgium, Spain, Italy, Germany, and Denmark.
The film's exclusive focus on young European Erasmus students already underlines the aims and limitations of what is supposedly a broad cultural and educational exchange. The emphasis on learning about "other" national cultures to achieve a more integrated European union quickly dissolves when the students abandon any interest in local culture, history or politics to focus instead on their own sexual and emotional rites of passage. (Ezra and Sánchez 2005, 137, emphasis mine)

In the collected autoethnographies, we can assume the presence of word of mouth, oral stories, where students share accounts of their experience abroad with friends and family. And we can also see many experiences abroad shared through pictures posted on Facebook or Instagram. There are also written stories in social media posts and some students keep a travel blog with more descriptive and systematic updates of their life abroad. On the institutional side, universities have dedicated websites describing their academic programs along with information on the social and cultural life in the host city and country such as "10 reasons to study at the University of Helsinki." 9 In addition, each program advertises itself by posting videos with professors and past students as testimonials (ambassadors) along with other videos targeting international students.
Yet, the analysis of the collected autoethnographies left me with a feeling of narrative vacuum. There is some sort of apparent self-evidence about going abroad being the "right thing to do" in expressive and instrumental terms-expanding one's cultural horizons, enriching one's CV-but the narrative void creates a major obstacle to finding a plausible answer to one of the core questions of this study: "What is the meaning of studying abroad?" The narrative account is in fact the primary and most potent interpretative and cognitive tool that human beings, as socially and culturally situated subjects, can utilize to make sense of their life experiences (Bruner 1991). The time dimension, the self-narration, and the self/heterorecognition dynamic are pivotal to the concept of identity: lived experience has a pre-narrative quality and personal life is "an activity and a passion in search of a narrative" (Ricoeur 1991, 29). Following Levinas, Ricoeur (1992, 187) argues that there is "no self without another who summons it to responsibility." The Other who performs this action substantially expresses a judgment about the individual, placing him/her within a system of categories: gender, age, cultural belonging, social status, education, work, etc. Identification by others features different degrees of stability, depending on the extent to which one's social profiles can be defined, and is subject to constant review. Self-identity is therefore constructed and reconstructed in the dialog with this internalized Other, yet this inter-intrapersonal narrative necessarily fishes in the sea of public narratives fixed in books or movies (Birindelli 2022).
In this study, the lack of intersubjective, common scripts leaves the international students alone in the attempt to make sense of their experiences abroad. If identity is a process, a construction of-and through-the individual and collective memory framework (Halbwachs 1980), what happens when the collective, public box is empty, or filled with scattered fragments of stories? The international student seems to epitomize the typical late-modern subject who must find new ways of implementing a reflection on himself that appears multi-faceted, complex, and, in some respects, solitary (Birindelli 2022).
There is a hiatus between the cosmopolitan promise made by academic institutions and the intellectual, cultural means given to the international student. Moreover, in the untold story of studying abroad even the Self-Other identity dynamics become blurred: which Other? Which Self? What happens, then, to the hyperbolically claimed goal of going abroad to expand one's horizons through the experience of a different Other? Are we talking about a local Other? Or is it "another like oneself": an international student from a different country, but who is essentially-and reassuringly-remarkably similar to yourself?
Findings and Discussion: Key Revelations from the Students' Narratives
My aim here is not to give a complete and exhaustive analysis of all the collected narratives but to disclose "snippets"; otherwise, the explanation of the research itinerary, of the qualitative tool, and of certain core interpretations would be too speculative, hence unclear and misleading.
The departure-preliminal section reconstructs the social and cultural background against which the decision to study and live abroad took place. The transition-liminal section of the template addresses the actual academic and overall life experience abroad, while the arrival-postliminal section probes a bond with a human being in the host culture and with a place that became familiar during the stay. In this section, I also encouraged the students to reflect on their immediate prospects: returning home, staying in the host country, or moving somewhere else.
The three autoethnographical phases of travel were introduced at the beginning of the in-depth interview by the collective identity pre-preliminal: here I asked the participants to give short descriptive accounts of the country of origin, hometown, host city and country, and north/south Europe, and to define the word "cosmopolitan." The autoethnography ends with the final free interpretation of self-identity and the experience abroad and with a sociographic appendix where participants in the research provided some simple but important information, such as their social class, parents' jobs and education, and family members' experiences of transnational mobility.
Collective Identity Pre-preliminal
For obvious reasons, territorial reflection is an ice-breaking and stimulating way to start the interview with international students. The question "where do you come from?" can be imagined as the entry ticket for participating in the abroad game: the lack of a clear original geographical boundary would not allow its transcendence.
IT (m, Middle East, Hel) thinks his country of origin is "complicated, difficult" and the people are "stubborn, aggressive, impatient, loud." Hometown clearly represents the past, and toward the end of the autoethnography (arrival-postliminal) he writes, "I really would like to stay here, but I can imagine myself also moving to other places, but not back to ***" (being a narrative study, the researcher needs to move up and down the entire autoethnography to make a thicker sense of ideas expressed in one part or the other). While his home country is "boring, the past" and the people are "arrogant, cold, elitist," Finland is "a new home, village, cozy, fun" and Finns are "different, nice, friendly, inward." In the attempt to portray students' orientations toward their particular home-worlds and the wider cosmopolitan elsewhere, I sketched out a series of self-identity types connected to mobility experiences. That said, obviously none of the students falls completely within the analytical boundaries of a single type: their self-narrative simply reveals (to me) various characteristics of one or more types.
IT reveals a self-identity sketch partially encapsulated by the Runaway narrative type: someone who is escaping abroad for political or existential reasons (they feel strangers at home). IT also shows characteristics of the Academic type: he is intrigued by the idea of becoming a worldly intellectual, "Professionally, I want to do PhD, academic life"; in the next 10 years, he sees himself "In the beginning of an academic career, as a junior lecturer, with a young family." IT gives positive meanings to the word cosmopolitan: "An inspiring, optimistic vision." International students in Helsinki give either positive, neutral, or negative meanings to "cosmopolitan." Neutral meanings relate to the idea of metropolis, while negative connotations range from inequality to snobbish lifestyle. Positive meanings are usually associated, both in Helsinki and Florence, with open-mindedness, tolerance, and appreciation of diversity. A negative idea of "cosmopolitan" is absent for the Florentine group, where the neutral meanings prevail, with several students unable to give a connotation to the word: "Politics maybe… but I do not know what it means"; "A magazine? A drink? I heard the word, but I never thought about what it means." The meanings given to "cosmopolitan" are apparently ambivalent, divergent, and far from being shared.
The Departure-Preliminal
Here I reconnect the meanings of studying abroad to the participants' biography, moving beyond the sociological representation of a subject without a past, an "agent without a story." Students' biographical past is often, if not always, neglected in this field of study, whereas an authentically narrative approach is required to interpret students' overall experience abroad, which cannot be confined to a singular biographical timeframe.
The autobiographical accounts in the departure section reveal a sort of push-pull identity dynamic triggering the desire to travel, live, and study somewhere else, away from home. For the Fated, all the biographical premises push-pull toward the status of international student. As one student writes, "I almost had no choice but to study abroad" (NS, m, East Asia, Hel). NS's parents met while the father was studying abroad during his bachelor's degree. Even his mother studied abroad in her youth to perfect a foreign language. Several relatives on the father's side had a studying-abroad experience, and his two sisters live in the USA after a study abroad period. NS's decision to study in Helsinki grew totally within his family culture: "I guess that this kind of experience was not alien for me. I guess they [parents] also wanted me to do the same thing, to study abroad." And there is a convergence also between family and country culture: "Because generally speaking if you get a foreign degree in *** you will be seen as more employable on the job market." Furthermore, even his hometown played a role in the decision to go abroad: "I am from the capital. There are a lot of people with a rich background, rich upbringing. So, the experience of living, of studying abroad is quite a common thing." NS's high school had an exchange program with a bordering country ("Already at a very young age I was socialized to the option of studying abroad"), and at his university, they support students financially to go abroad. NS had his first studying-abroad experience as an undergraduate in 2012 (6 months in a European country), so he is a studying-abroad "veteran," like almost all the participants in the study. Sentimental life is no exception in NS's biography; his ex-girlfriend was an international master student in a north European country. NS concludes "I guess it was not a fresh idea for me to study abroad": the Fated.
Narrative traits of the Runaway, the Academic, and the Fated types emerged already in the departure-preliminal section, and sometimes, as for IT, even in the pre-preliminal section. Other types were revealed later in the autoethnographies. I will restrict myself to synthetic passages for each of the remaining types; however, I beg you to indulge me on the Runaway, since this type brought to light an unexplored meaning for the international student. ZW (m, Central Europe, Hel) comes up with a definition of the Runaway international student.
I met many international students here. I got the feeling they came here because they did not like their life back at home. It's not that they said this out loud, I had the feeling that they were running away from something. Either because they are not happy with their home country, with the political situation, or maybe it's because of the personal situation.
For the Globetrotter, being mobile is an end in itself: the goal is the next city-country. HN (f, North Europe, Hel) tells us that she has already visited 50 countries. She also did a quite extraordinary exchange program in East Asia that represented a watershed moment: "When I was there I discovered that I really like traveling, living in another city."

Before I went there, I never really considered leaving ***. I had kind of a plan, and a boyfriend, and a career path. I was going to be a psychologist. Then I went to *** and everything changed. I decided I don't want to do psychology and I don't want to stay in ***. I want to travel a lot and live in different places.
HN is not a Fated; her family culture cannot be considered highly mobile. However, her mother has always been encouraging: "She said I know you need to do this, you need to go. I guess she did some traveling when she was in her twenties that a lot of people would not have done."

The Explorer had previous experiences abroad that do not fall within the social and cultural perimeter of the "studying-abroad world." ND (m, Oceania, Hel) worked overseas for long periods in places with a totally different climate from his home country. ND cultivates a strong desire to discover and understand places and people, always looking for new cultural challenges, both with other international people and with the locals, showing the capacity to reach out for the indigenous, thus, in his own words, "bursting the international students' bubble" away from the "mobility capsule" (Czerska-Shaw and Krzaklewska 2021). Compared to the Globetrotter, the Explorer decides to stay longer in "a" foreign country, and she/he is not interested in visiting as many countries as possible. ND genuinely wants to understand, absorb, and integrate with this particular culture, the Finnish one. He did his undergraduate exchange in another Finnish city, and even then he did not live only in the international students' bubble: "There are many reasons why it was a good experience, but I think one of the main good reasons is because I made friendship with Finns."

The Lover is abroad because of the partner. ZN (f, Central Europe, Hel) was an undergraduate exchange student in Helsinki. She met her current Finnish boyfriend and now she has a clear idea why she is in Helsinki.
I am here because of my boyfriend. Originally, also because I loved to be surrounded by international people. There is a big gap between my Erasmus experience and my master's degree experience. In the end, I stayed because of my boyfriend.
The Worker is studying abroad for clearly instrumental reasons. ZH (m, South Asia) is in Helsinki "to study and have better job opportunities" and HV (f, South Asia) "I'm studying *** [a top-level master program] at the University of Helsinki and my whole purpose of being here is education." Contrary to what is commonly believed, studying abroad at Master level is not necessarily closely connected with the acquisition of skills in view of a job. After some exploratory in-depth narrative interviews, I decided to stimulate the students by asking them to answer the following straightforward question: "Why am I in Helsinki/Florence?" Only 13 students out of 50 wrote that they were abroad for academic reasons.
NV (m, Central Europe, Flo) has a dual cultural view shaped by his belonging to an ethnic and cultural minority. Even NV's narrative shows the identity traits of the Explorer. He is willing to face cultural challenges and he is able to interact with both international and local people. He is, of course, a study abroad veteran, and the decision to come to Florence was shaped by previous experiences that, in his case too (as for ND), do not entirely fall within the studying-abroad boundary: a typical narrative characteristic of the Explorer. NV, in fact, did 1 year of European Voluntary Service in a small town in Finland. Furthermore, his traveling is strongly motivated by an intellectual curiosity rooted in the field of study: he is abroad for a specific reason (typical of the Explorer) that transcends generic instrumental ends (embellish the CV, job, etc.) or expressive goals (expand one's cultural horizons, growing as a cosmopolitan person, etc.). In Florence, NV is autonomously studying Machiavelli and doing some sui generis ethnographic work, visiting places connected to Machiavelli's biography. He is more attracted by the substance of intellectual life rather than having the Academic's fascination with the role. NV's drive to travel and live abroad is certainly shaped by his family background; his grandparents emigrated to *** in the 1960s. However, the most striking aspect of NV's story is that he is the only international student in the study who convincingly identifies his social class as "middle-lower" (lower, in my interpretation).
I was raised in a marginalized region of ***, it was full of migrants from Turkey, Poland, Russia. I felt rich in a cultural way. That made me curious about other countries. Growing up in that neighborhood in *** sparked the motivation why I am living abroad.
Thus, besides Explorer, I also identify NV with the narrative type Maverick: someone who comes from a middle-lower/lower-class family and does not share the highest common denominator of international students: an upper, upper-middle, or middle-class family with significant cultural capital.
I was having difficulty in interpreting the Florence group using only the narrative types I created for the international students in Helsinki, so I created an extra type only for Florence: the Tourist. The length of the stay (2 or more years) and their advanced student's status were not enough to sabotage the hermeneutic potential of the tourist narrative type. RH (f, EurAsia) writes: "The idea was to combine studying and traveling. I've been a tourist in Italy and I liked the country." And she adds: "Actually, Florence is not a good city to live in. It's good to see museums, art, history, but it is not a city for normal everyday life."
Transition-Liminal
The transition-liminal section explores different academic and life experiences abroad (city life, housing, friends, education, interaction with locals, social life, etc.). Here, and in the postliminal section, I will concentrate on emblematic autoethnographic passages dealing with social life abroad and the interaction-experience with locals.
QS (m, North America, Hel) reveals traits of the Explorer: he is abroad for a specific reason and his past experiences do not fall entirely within the studying-abroad boundary. QS is on track to become a pastor and he chose the Master in Religion, Conflict, and Dialogue. He had a privileged experience because he has personal interests and skills fundamental to his identity that allowed him to establish contacts with locals, thus escaping, even momentarily, the international students' bubble. It was his biography that allowed him to open "a" door to local culture, rather than institutional assistance from the University of Helsinki or students' associations.
Analysis of the autoethnographies reveals that you cannot experience the host culture holistically and vaguely. Access to local people takes shape only through an active attitude; it entails specific social skills, and "doing" things: performing (sports, religion, volunteering, a part-time job, etc.) rather than consuming or passively participating in social events of all sorts. QS goes to church every week and "through that I made connections with people that actually live in my neighborhood." This is a fundamental point; through the group of churchgoers, he can meet locals and get involved in their social life. It is "one kind" of Finnish people (believers who go to church), but it is an important breach in the cosmopolitan bubble toward a slice of the particular local culture. A social competence that is part of the subject's biography and self-identity becomes "a" key (not a passe-partout) to enter "a" door of the host society leading to some specific meaningful social space and group of people.
Later in the autoethnography, QS writes:

One of the families I got to know at church invited me to their house for Easter dinner… I left my bike there for the summer. I am probably a little odd in that respect, I was able to develop relationships outside the academy, outside of the international students' group.
Sociologists tend to converge in defining cultural cosmopolitanism as an orientation of openness to foreign others and cultures, inspired by Hannerz's seminal study of cosmopolitanism as "an orientation, a willingness to engage with the Other" as well as "the aspect of a state of readiness, a personal ability to make one's way into other cultures … a built-up skill in maneuvering more or less expertly with a particular system of meanings and meaningful forms" (Hannerz 1990, 237-251, my emphasis). My study confirms that this "ability to make one's way into other cultures" is indeed personal. In this study, it seems that cosmopolitan orientation is provoked less by Hannerz's "intellectual and aesthetic stance of openness toward divergent cultural experiences" (1990,239) and more by the social skills that allow the international student to open certain cultural doors and engage with real people. Only through social activity can cosmopolitanism take on "a" shape. And, in an apparent paradox, cosmopolitanism is achieved only when students break free of the cosmopolitan study abroad bubble. This is what can be expressed as social cosmopolitanism enabling genuine and concrete contact with people of another culture, different age, social class, etc.
This cultural ability is personal and cannot be activated or facilitated by the host university. Students' associations in Helsinki have the important function of connecting international students with each other or with local students who appear remarkably similar to them. They coined a sociological label for this kind of Finnish student; as IT (m, Middle East) writes: "Most of my social life was and still is with international students. Now I have some Finnish friends. But I call them 'International Finns', they join our events and act as if they were foreigners here. We use the formula Internationally Minded Students." That said, the importance of international students' associations as a key socializing function is undeniable, making it possible to build the community of cosmopolitan students abroad. Although associations cannot guarantee an entry into local social life, they do bestow a sense of identity and community abroad and a form of engagement that is not just recreational or consumerist. Based on this study, the absence of such associations in Florence is seen as a minus. KS (m, East Asia) writes: "One of the strange things here is that there are no students' associations. I've been in a student organization for all my academic life back in Asia. Student life basically does not exist here." OY (f, West Asia) adds: "That I know, there are no students' associations, students' life is not organized. That is a problem, I could not find a community to join." The presence of numerous Erasmus groups with leisure connotations (party, happy hours, discos, touristic trips) underscores the recreational nature of social life in Florence. International education does not stop at the gates of the university campus, and this side of the social and cultural experience abroad also needs to be addressed, especially in a tourist city like Florence.
Arrival-Postliminal
Part of the arrival-postliminal section consisted in portraying a bond with a human being in the host culture and with a place that became familiar during the stay. Here too I concentrate on autoethnographic passages focused on social life in general, and the interaction-experience with locals. I also explore the perception of the host city, attempting to see how the urban space might have shaped social relations and the connection between the experienced social and cultural space and international students' future.
For the human bond, most of the students in Helsinki indicated a classmate or the group of fellow students abroad. Besides QS, who created a strong connection with the family he met at the local church, most of the bonds are with people within international academic confines; I believe this is quite normal and understandable. There is also, however, another recurrent strong form of relationship: the boyfriend/ girlfriend. In these cases, the partner is always a Finn and represents a disclosure of the local world. Otherwise, it is rare to find a student who creates a strong bond with a local.
Regarding the non-human bond, Helsinki is considered a functional and livable city, and most of the international students truly appreciate this feature. IN (m, South America) feels a connection with the city of Helsinki because "It's a peaceful and safe city. I can walk in the streets without worrying that I can be robbed or mugged or something." The narrative passages dedicated to the city of Helsinki reveal a general appreciation of the harmony between the city and the natural world, along with the cultural vivacity, good public transport, and its overall functionality. However, what emerges from the narratives is a generic reference to "places by the sea"; besides the obvious familiarity developed with the neighborhood where they live or with university buildings and communal areas, none of the students developed a strong form of emotional attachment to a specific place, either because of its beauty or its comforting or reassuring function.
International students in Florence somehow managed to create more human bonds with local people. Beyond the typical creation of strong forms of relationships within the in-group of studying-abroad students, somehow the international students bubble seems less impermeable in Italy. Moreover, bonds do not follow the sentimental relationship path as in Helsinki, where several students had an indigenous partner. KS (m, East Asia), besides other students from his home country, "made a bond with Italian friends, we study together." XO (m, South Asia) was able to establish a strong connection with "Four people… They are there for me anytime. A guy from ***, a guy from ***, and ***, she is Italian." KG (m, Africa) made "a strong bond with three Italian guys. It's a strong bond that will last forever." Friendship with locals involves mostly classmates and other Italians presented by them. It is apparently easier to become friends with an Italian than with a Finnish classmate. Because of the absence of students' associations in Italy, and the touristic tradition of the city, at the beginning of this study I imagined exactly the opposite. A possible alternative interpretation in line with the findings is that, while international students' associations in Helsinki (promoted and fostered by the university) represent a plus in many senses, at the same time they strengthen the spontaneous and conventional ties with other international students and contribute to foster the construction of a sort of enclave. Conversely, the "institutionally abandoned" international student in Florence is almost forced to establish contacts with locals, while Italian students feel almost compelled to reach out to their international peers.
The non-human bond with the city reveals another antithetical cultural dynamic between Florence and Helsinki. WW (f, EurAsia) writes "I have a strong bond with the city. I love Florence. Even if I am alone, I would still want to live here. In Florence I do not need people, the city is all I need"; MU (m, South America) echoes "I feel connected. I love Florence. I really do love this city." The group of international students fell in love with the city of Florence. ON (f, East Europe) comes up with a generalization: "All international students have one thing in common: they all love the city of Florence. That is the first reason mentioned by any international student I met," and later in the autoethnography adds: "Nobody I met mentions studying as a reason to come to Florence. The only reason is the love for the city." This brings us to the next narrative prompt regarding reflection on the immediate prospects: returning home, staying in the host country, moving somewhere else. Students in Florence would stay for the beauty of the city, the lifestyle, etc., but do not see Florence (Italy in general) as a suitable place to concretely construct their future life. DA (m, Africa) writes: "After the master I will go back home. I would like to stay here but as for job opportunities is better to go back home. Staying here with a master's degree will not make a difference." Florence seems to represent a liminal moment painted with the colors of a vacation from real life. Some students are ready to make compromises to stay, which I interpret as the desire to prolong the vacation. One student mentions Milan as a possibility, a few mention the language barrier, but in my interpretation the main obstacle is that they do not see Italy as a stage for an ordinary everyday life where work has a pivotal role. Italy is the perfect country for an extraordinary life: a vacation. This probably explains why I created an extra type only for Florence: the Tourist.
Only one student indicates the charm of the city of Helsinki (or Finland/Finnish lifestyle) as the main reason for being abroad, whereas almost half the Florence students fell in love with the city, country, lifestyle, etc. The passion for Florence (Tuscany, Italy) has the contours of a confirmed expectation, while the enthusiasm for Helsinki (Finland) seems to develop during the stay, without any kind of premise molded by representations and images found in the media in the broad sense (movies, documentaries, books, internet, social media, etc.). However, international students saw Helsinki and Finland as somewhere they could build their real life. Several students wanted to remain in Finland after the Master but foresaw obstacles in terms not of skilled job opportunities, as with Florence, but of language: "There are not that many jobs for English speakers in Finland. So far I haven't got any job I have applied for… But I haven't closed the door to staying in Helsinki just yet" (PW, f, South Asia).
Beyond job opportunities, sentimental relations are obviously decisive for the Lover type: "My boyfriend's father has a big company here. Here I have almost zero job chances. I feel I have to decide between my boyfriend and my career" (ZN, f, Central Europe). Runaways rule out the possibility of going back home. IT (m, Middle East) writes "I really would like to stay here, but I can imagine myself also moving to other places, but not back to ***", and adds that "I want to do PhD, academic life." As noted, IT's autoethnography also reveals the traits of the Academic type. Although in qualitative studies numbers do not make much sociological sense, it is interesting to observe how in the Helsinki group one international student out of three wants to pursue an academic career.
The Academic is not a secondary narrative type in this study. Participants in the research spent the past 10 years in international education at different levels. Academe probably constitutes an important, if not the most important, "province of meaning" in their paramount reality of everyday life (Schutz 1962/2012). The master program does not signify a bridge toward a professional working life, but a required step to do a PhD and remain within academe, which is a big part of the "real life" they have experienced so far. During the in-depth interviews, I asked students to imagine the recognition of the self-identity status change brought about by the life experience abroad. Interestingly, the majority (both in Helsinki and Florence) imagine such recognitions within their private sphere: family and friends. Rather than institutional others or generalized others (Mead 1934), it is a recognition from significant others (Sullivan 1953) in the personal sphere: "My mom will tell me that she is really proud of me, of where I am and what I am doing" (PW, f, South Asia, Hel); "My family and my boyfriend. Even my friends from university. They appreciate what I am doing abroad" (OY, f, West Asia, Flo).
In the last 20 years or so, universities worldwide have been deeply engaged in some sort of internationalization process. So, why are international students, the protagonists of this process, not duly celebrated? There are generic references to the importance attributed to an international Master degree in the home country, but the scarcity of more formalized recognition of academic achievements and change of status is evident. Interpreting studying abroad as a rite of passage, a postliminal form of recognition within the academic community is clearly missing, tending to sustain the liminoid or limbic characteristic of studying abroad. The absence of celebratory occasions attended together by the academy, family, and friends (the community as a whole) casts uncertainty and ambiguity on the newly acquired status of cosmopolitan student. Again: what is it? What's the story?
Even this final section of international students' autoethnographies reinforces the already stressed interpretation of a "rite without a story": in this case, what is missing is "the end" of a season of life, chapter, etc. The absence of any ritualistic forms of recognition-either by the host university abroad or the home university-is striking when compared with the vast studying-abroad phenomenon: a core, if not "the" core, administrative objective of universities worldwide.
The value of studying abroad sometimes seems confined to the international students' group: "International students that are living like me, that are in the same existential situation give value to what I am doing here" (MU, m, South America, Flo). As for the outgroup, studying abroad sometimes has a meaning that stands between generic instrumentality and distinction practice, and the latter is far from unusual in this study.
If I go back to *** it will be very cool. Makes me stand out from the crowd. If I go somewhere in Europe it might be cool as well. Because many people know that a Finnish degree, at the University of Helsinki is very good. (EP, f, EurAsia)

Why throw out "cool" as a possible interpretation for the popularity of studying abroad? Isn't it also cool to be cosmopolitan rather than local or parochial? More than 20 years ago, Bauman (1998, 2) wrote that "mobility climbs to the rank of the uppermost among the coveted values" and that "the freedom to move, perpetually a scarce and unequally distributed commodity, fast becomes the main stratifying factor of our late-modern or postmodern times." I finally pushed the international students of this study to stretch their existential imagination into the future, 10 years later (where? doing what? etc.), trying to guess the meaning of the studying-abroad experience in that envisioned future, hence exploring a possible shift of significance during their imagined adulthood. This final part helps us to rebalance the academic and scholarly vision of international students as the spearhead of the cosmopolitanizing process, a vision that is sometimes overly stretched, to the point of seeing them as political actors or cultural brokers instead of simply young people on the brink of adulthood. What we find in the last section of their autoethnographies is often a family, a job, a nice house, and some good friends. Ten years from now, "I hope that at least I have a house. A family. And a job that I like. Hopefully in Finland and in a nice area of Helsinki" (QK, m, South Europe). The projection into the future features the desire to have a normal, good life, stressing the overall self-identity growth of the studying-abroad stage on the way to adulthood.

If you stay at home you do not develop your potential, you need to go abroad. There is so much to see and experience and you need to do that in person, otherwise you remain with just a poor, superficial and most of the time wrong idea with media images of the world, of other people. (OY, f, West Asia, Flo)

It is self-growth. The cultural challenges and the experience of the cultural Other (albeit sometimes another that looks a bit too much like you) make international students grow as young adults. Since they are not social scientists or political activists, why should it be otherwise?
Conclusions: Overcoming Hermeneutic Obstacles and Contribution to Literature in the Field
In this article, I have presented an overview of a comparative and qualitative study of the academic, social, and cultural experience of international master students. By reconstructing the temporal bridge between the phase of life abroad and students' lifespan tout court, the narrative-biographical approach pursued fills a current knowledge gap within the multi-faceted interdisciplinary field studying youth, human and educational mobility, cultural globalization, and cosmopolitanism. In this research, the time abroad is conceived as a building block in students' self-identity construction and as a stage of their human development: young, young adult, adult. By yoking together in a truly narrative guise time and geography-past-home, present-abroad, and expectations for the future either in the host country, back home, or elsewhere-I was able to interconnect time and space dimensions and overcome a knowledge gap even in this sense.
Narrative-biographical studies of this kind are rare, and this scarcity constitutes the first hermeneutic obstacle: the researcher has no solid scholarly and fieldwork props to lean on. It is only by reconstructing students' stories that we can reach a broader and deeper understanding of the academic and overall experiences abroad within the concrete context of young people's lives. Only a narrative-biographical study can bring to light hidden meanings and open hermeneutic itineraries that go beyond the international student mobility "axiom." The current pandemic-induced pause calls for a questioning of the assumed significance of mobility and a reconceptualization of the mobility-immobility dyad in new conditions of "disimagined mobility," where portraying a clear and attractive vision of a grounded transition to adulthood becomes problematic (Cairns et al. 2021).
The conceptual mesh and the method I propose, or a similar one, appears to be a viable interpretive path for acquiring a holistic understanding of this social and cultural phenomenon. The narrative template I created for this study is at once simple and comprehensive. It can certainly be improved. However, I believe that the basic narrative-biographical structure should be kept intact: it is impossible to reconstruct a reliable portrait of the impact of studying abroad on students' lives without a systematic reference to their past, present, and future. Since we are addressing mobility, I am also confident that the hermeneutic overlay with the phases of travel is valid, with analytical attention to ritual (liminal/liminoid/limbic) conceptual meanings, because the international students I met in this study are trying to travel in many senses: geographically, culturally, socially, and existentially.
Subsequently, I pointed out other interpretative hindrances. The vacuum of qualitative studies of this kind is surrounded by another narrative void: the absence of public and popular stories (mainly novels and movies) on studying abroad: the second obstacle. The lack of such tales undermines our capacity to reconstruct in a well-rounded fashion the traits of the protagonists of the studying-abroad story: the international students. My attempt to create narrative types such as the Fated, the Explorer, and the Runaway stems precisely from this awareness. Of course, it is an attempt that can be improved, but at the same time I believe it cannot be easily dismissed. While waiting for the storyteller to write new poetic scripts for our heroes, we must try to reconstruct in a prosaic academic way the profile of the young person as a character in his/her life story.
There are two more major obstacles to this kind of research that have so far appeared only between the lines of this article. The third obstacle is easy to grasp: people out there are not eager to tell their life story to any passing researcher and write an autoethnography. Thus, it takes time, hard work, and reciprocal trust to carry out the fieldwork. Once you have succeeded in finding participants and conducting in-depth interviews, I see giving the full transcription back to participants as the only possible way to prompt the writing of a partial autoethnography. Afterwards, we can meet, and do focus groups and the like. But the gist of this kind of study lies in the transformation of an oral story (the in-depth interview) into a written story (the autoethnography), with the commented full transcription as a middle step.
The fourth obstacle is the required multidisciplinary approach. Once the researcher gains sufficient knowledge to carry out fieldwork and interpret from different disciplinary angles (in my case sociology, social and cultural anthropology, psychology, and narrative studies), it is hard to publish the results in journals. Editors and peer reviewers will always ask you to be "on top" of each of the disciplines they are expert in. People doing this kind of fieldwork can be seen as "detectives" looking for clues and linking them together, after which they present their evidence to the academic theorists, whose role is to connect different disciplines into a new, understandable whole.
As for some of the main studying-abroad meanings revealed by this narrative and autoethnographic research, I would synthetically point out the following. (1) The existential significance of studying abroad: rather than being instrumental (studying in view of a highly skilled job) or expressive (expanding one's cultural horizons and the like), the experience abroad seems more the search for a sort of personal promised land within a culturally globalized world. (2) The scarcity or absence of postliminal institutional forms of recognition for the academic experience abroad: given the vast scale of international students' mobility worldwide, one would expect more systematic rituals of cultural-structural recognition for young academic achievements. (3) The extreme relevance of personal skills grounded in the students' biography that allow some of them to competently perform in a field (sports, religion, volunteering, a part-time job, etc.), giving access to a slice of the host culture (what I called social cosmopolitanism) rather than vaguely consuming the culture or passively participating in social events of all sorts. (4) The uncertainty of the meanings attached to "cultural cosmopolitanism" within the overall life experience abroad and the need for effective operationalization of the concept: the "internationally minded students" label that appears in students' autoethnographies portrays a cosmopolitan group separated from the local Other.
I believe this study has made a contribution to refinement of the concept by tracing the contours of the "cosmopolitan student" on the brink of adulthood, or at least of the international students I met. And the students I met have their ideals, their dreams, their fears. Despite being highly mobile, they are all searching for their place in this world and, however you want to put it, this "niche" consists of a job, a house, a partner, and some good friends. Moving away from all of this would be a big sociological mistake.
Attempting to draw a synthetic conclusion from what I studied, my point is that "out there" in the stormy cosmopolitan sea what's missing most of all is the image of a "good life," a life worth living even far from the bright lights. I do not see how one can become cosmopolitan, a "good citizen" of a global world without stories of a "good life" that are not a mirage.
Funding
Open Access funding provided by University of Helsinki including Helsinki University Central Hospital. This work was supported by H2020 Marie Skłodowska-Curie Actions, grant number 702531. Permalink: https://cordis.europa.eu/project/id/702531.
Declarations
Ethical Approval Ethical Review Statement 11/2016, University of Helsinki Ethical Review Board in Humanities and Social and Behavioral sciences, Helsinki 15/3/2016. The ethical and data protection approvals, along with the information sheet and consent form, can be found at the research blog: http://culturalexperienceabroad.blogspot.com/. In this article, narrative passages from the autoethnographies are quoted with the initials of the pseudonyms.
Competing Interests
The author declares no competing interests.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
|
v3-fos-license
|
2021-05-05T00:08:35.092Z
|
2021-03-22T00:00:00.000
|
233635995
|
{
"extfieldsofstudy": [
"Psychology"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://rajpub.com/index.php/jssr/article/download/8983/8167",
"pdf_hash": "50fd9691c2e12cfc6c9c313020b05b9ed614055d",
"pdf_src": "Adhoc",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46394",
"s2fieldsofstudy": [
"Education",
"Medicine"
],
"sha1": "722bac72ded12d0a69e352181524a065d0f7a9b7",
"year": 2021
}
|
pes2o/s2orc
|
A Qualitative Study of Nursing Student and Faculty Perceptions of Attrition
Institutions of higher learning struggle to supply enough Registered Nursing Professionals to meet demands in today's healthcare environment. Hundreds of thousands of students are accepted into nursing programs each year, though many fall short of program completion. High attrition and low retention in registered nursing programs are a problem. The purpose of this qualitative study was to evaluate attrition and retention of registered nursing students over a four-year period at a Technical College in Georgia to determine factors impacting successful completion of the Associate of Science in Nursing Program (ASN). It further proposed to identify possible solutions to reduce attrition among nursing students. This study analyzed nursing student and nursing faculty perceptions of the causes of high attrition and low retention and perceived solutions to attrition rates. This study provides insight into the development of steps to decrease attrition in registered nursing programs and other higher education programs of study.
Introduction
Nursing has been in a continual battle to remain a respected profession due to the challenge of acquiring and retaining highly qualified nursing professionals (DeLack et al., 2015). To meet this challenge, institutions of higher learning are tasked with retaining nursing students to program completion and subsequent passing of the National Council Licensure Examination for Registered Nurses (NCLEX-RN). High attrition and low retention in nursing programs have remained problematic, which affects completion rates and ultimately the supply of nursing professionals in the workforce (Peruski, 2019).
The problem addressed by this study was high attrition in undergraduate registered nursing programs. Multiple issues have been identified that increase attrition and lower retention in health education programs of study, namely nursing. Among some of the reported causes are stress and anxiety and often the learner's inability to remain motivated (Senturk & Dogan, 2018).
Nursing programs have lost students as a result of a variety of stressors in academic, social, and/or external environments. It has also been concluded that stress, anxiety, and lack of coping skills affect students' ability to learn and thus contribute to attrition (Labrague et al., 2017; Turner & McCarthy, 2017). The anxiety that looms while learners are in nursing programs can intensify as they progress to the end of the required coursework (Li et al., 2015).
Despite admission criteria that should ensure success of the student, nursing programs have continued to lose students at alarming rates. Retention and attrition in higher health education remains a focus as it has a direct effect on student, institutional, and societal success (Northall et al., 2016).
The purpose of this study was to evaluate attrition and retention of registered nursing students between 2015 and 2018 at a Technical College in Georgia to find what factors impact successful completion of the Associate of Science in Nursing Program (ASN). The study utilized semi-structured interviews as a means to recognize how students and faculty interpreted and found meaning in their own unique life experiences and feelings in regard to attrition (Jacobsen, 2017). High attrition and low retention in community and Technical College nursing programs play an integral role in the nursing shortage, as new nurses must be educated to take the place of an aging licensed registered nurse population.
The study was designed to explore the factors students identified as reasons for not sustaining until program completion and their correlation to what faculty perceived as barriers to completion for nursing students. With this information, an added goal of this study was to identify possible best practice solutions to reduce attrition among nursing students in Georgia (Simplicio, 2019).
There is a growing shortage of qualified nurses. To continue to combat the shortage of capable registered nurses, nursing programs must acquire and retain students until full completion of the program. On average fifty-eight percent (58%) of high school graduates enter college each year and less than forty percent (40%) of them complete their degrees as planned (Thompson, 2018). In 2010, the Institute of Medicine (IOM) issued a report stating that by the year 2020 at least eighty percent (80%) of the registered nursing population would have at least a Bachelor of Science Degree in Nursing (BSN) at the entry level (Orsolini-Hain, 2012). The reality is that progress toward 80% of registered nurses having a BSN has been made, but it is still short of the goal. It is speculated that the 80% target may be accomplished by 2029 (Thew, 2019). Community and Technical Colleges currently provide the majority of the registered nursing workforce throughout the nation making attrition and retention a hot topic for discussion for Associate of Science in Nursing (ASN) programs. If retaining students for two years is a problem, surely four-year colleges and universities are seeing similar if not higher attrition rates.
The research revealed a number of factors that are related to the climbing rates of attrition in ASN programs. Stress and anxiety can play a role in high attrition and low retention at the collegiate level (Brussow & Dunham, 2018;Kukkonen et al., 2016;Li et al., 2015). The anxiety that looms while learners are actively in nursing programs can intensify as they progress to the end of the required coursework and begin to prepare for Health Education Systems Incorporated (HESI) exit exams and subsequently the NCLEX-RN (Doyle et al., 2019;Li et al., 2015). HESI exams are administered throughout and at the end of many undergraduate registered nursing programs. HESI exams are intended to assist students in preparation for the NCLEX-RN and are often high-stakes exams that can serve to progress or halt nursing students in accredited nursing programs (Dreher et al., 2019). This leads to an increased mental burden on students and educators alike causing more anxiety. The anxiety within nursing programs has a direct effect on attrition rates as the stress of the HESIs and NCLEX-RN spills over into the coursework, impacting students' ability to fully concentrate (Dreher et al., 2019;Sears et al., 2015).
Another factor relative to stress and its link to high attrition in undergraduate nursing programs is the experiential learning practicums or clinicals as well (Lipsky & Cone, 2018). Nursing is a practice profession, making the clinical experience an implicit portion of the learners' educational journey. Clinical experiences are paramount to the success of the registered nursing student. Practical experiences coupled with gender and age have been recurrent themes that emerge when nursing student attrition is studied (McKeever et al., 2018). A mixed-methods approach to studying nursing student attrition exposed three themes relative to experiential learning experiences: ineffective placement organization, problematic journeys to placement, and disappointing clinical experiences. Although not the only reason, practice related problems consistently are listed as part of the reason retention rates in nursing programs remain low (McKeever et al., 2018). Other reasons for learners leaving nursing programs prior to completion include academic challenges, burden of life demands as well as educational demands, financial strain, negative experiences, lack of support, and illness or injury (McKeever et al., 2018). Another cause for high attrition is discomfort among gender minorities, including male students, feeling pressured by stereotypical gender ideals which, when coupled with isolation, prompt attrition (Ferrell & DeCrane, 2016).
As more and more colleges compete for higher numbers of admissions, many students with academic deficiencies are being admitted and permitted to enter various programs for which they are not academically prepared. One result of this behavior is high attrition and low retention. Research shows that over sixty percent (60%) of the nation's healthcare workers are educated in a community college program (Vedartham, 2018). The curriculum for those programs often if not always includes some component of an anatomy and physiology (A&P) course as a prerequisite for entry. The attrition rates in these course subjects may have a correlation to attrition in subsequent health sciences occupational programs of study (Vedartham, 2018). Studies show that close to half of all students that enroll in an A&P course do not persist to completion. The aim of community colleges is to appeal to non-traditional learners; thus they must remain cognizant of the incidence of educationally and academically underprivileged students being admitted and predisposing their programs to at-risk learners (Vedartham, 2018). Healthcare professions rely heavily on science and math; therefore, retaining learners in the requisite A&P courses is paramount. An ex post facto study examined over two hundred former nursing students' records along with qualitative data from ten full-time faculty members, thirty new graduates, and forty-five directors of associate degree nursing programs. Findings asserted a link between high attrition and performance in two pre-program biology courses and three components of the pre-admission test for nursing (Vedartham, 2018). The analysis found various strategies to assist in improving attrition and retention in A&P courses, including learners taking an assessment test to evaluate readiness for the course. With this test, at-risk learners are identified and required to take two preparatory biology courses along with a technology course prior to being allowed to participate in the required A&P course (Vedartham, 2018). The foundational knowledge gained from the pre-courses prepared learners for the rigor of the A&P classes. The study and positive results allowed the community college to obtain a grant to support the program by providing dedicated faculty as well as free tuition to the students for the pre-courses. The overarching result was that over ninety-five percent (95%) of the learners who participated in the program were successful in the A&P courses (Vedartham, 2018). Success in core requisites is a precursor to successful completion of higher education programs of study (Chan et al., 2019).
When learners are unable to complete their programs of study, it not only affects them but the higher education institution as well. For the learner the impact can be financial as well as social. For nursing programs in general, when a student does not complete, it continues to intensify the already dire shortage of practicing nurses. This effect is multifaceted and universal, as the ultimate concern is lagging patient/client care (Tower et al., 2015). The literature review suggests nursing is attracting more non-traditional, diverse learners with more intense needs than educational institutions are able to support, thus leading to non-completion (Ferrell & DeCrane, 2016). It was found that attending college for non-traditional learners is akin to culture shock, as the learners come into the program ill-prepared for the workload. A study by Tower et al. (2015) examined nursing students in their initial semester and tracked them to find critical markers linked to attrition. Interventions such as early identification of at-risk students, effective orientation, and mentoring were among programs available to enhance the quality of the learning experience and lower rates of attrition. This study included two hundred twenty-three (223) first-semester nursing students from one Australian university, of which seventy-eight percent (78%) were full-time students. It was found that, among the learners who studied full-time, failure of at least one course was a predictor of attrition. The study afforded students with critical at-risk markers interventions including tutoring, increased orientation, access to blended learning opportunities, and the choice to study part-time. The result was a 3.25% increase in retention compared with the previous year, before the interventions were implemented (Tower et al., 2015).
Materials and Methods
This qualitative study was designed to gain insight concerning high attrition rates among ASN students in a Technical College. The study collected data via face-to-face and telephone interviews with students who were previously enrolled in the ASN program at the study institution. Moreover, in-depth face-to-face and telephone interviews were held with current ASN faculty to gain insight from the faculty perspective. The interviews were used to provide comprehensive information about the participants' experiences and viewpoints associated with the ASN program (Christenbery, 2017). IRB approval was obtained prior to collection of data.
Population and Sample
This study relied on voluntary participants who met the inclusion criteria. To be included in the study, student participants must have been actively enrolled in the ASN program at the research institution between 2015 and 2018. Also, to be included, the learner must have failed to sustain through degree completion in the ASN program. The inclusion criteria for faculty participants included being an active faculty member in the ASN program within the timeframe of 2015 to 2018. The Technical College supplied the researcher with the contact information of previous nursing program students who met the research criteria. A letter was sent from the researcher via email or regular US postal mail to prior nursing students who met the specified criteria, with a request to participate in an interview with the researcher. The potential population of students consisted of up to fifteen male or female students over the age of eighteen who met the research criteria. More qualitative data came from interviewing participating ASN faculty, whose identification was made possible through the Technical College's faculty portal. The faculty also received a letter detailing the implications of the research study and, upon their voluntary participation, interviews were scheduled. The faculty participant sample comprised up to twelve faculty members who met the inclusion criteria.
Instrumentation
Interviews were used with students and ASN faculty who met the inclusion criteria. The guided interview questions were altered as necessary during the active interview process as the questions were used as instruments to assist in gathering insight into the ASN learner experiences as well as the faculty experiences with the ASN learners within the nursing program (Rudestam & Newton, 2015). The semi-structured interviews were completed orally face-to-face or via telephone and the researcher took notes of the responses during the interview.
Data Analysis
The data obtained from the interviews were analyzed for patterns of meaning through coding. A computer-assisted data analysis software package, the IBM Statistical Package for the Social Sciences (IBM-SPSS), was utilized to support the analysis. Once coding was completed, recurring themes were identified and interpreted. All results were shared with the host institution. Table 1 presents a descriptive analysis of student demographic and student-related characteristics. The data indicated that the average student was 29.50 years of age (SD=5.93, MIN/MAX=22-38), female (n=7, 70.0%), and of a White racial identity (n=5, 50.0%). Most students also held a diploma as their highest degree (n=4, 40.0%), were enrolled part-time (n=5, 50.0%), attended in the years 2017-2018 (n=5, 50.0%), and withdrew in the years 2017-2018 (n=5, 50.0%). Seven semi-structured interview questions were asked of student participants to better understand "What influences students' ability to complete the registered nursing program in an Associate of Science in Nursing (ASN) Program at a Georgia Technical College?"
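For illustration only, the short sketch below shows how coded interview responses can be tallied into the theme counts and percentages reported in the tables that follow; the theme labels and response data are hypothetical, and the published analysis was carried out in IBM-SPSS rather than Python.

```python
from collections import Counter

# Hypothetical coded responses for one interview question (n = 10 students);
# each response may carry one or more theme codes assigned during analysis.
coded_responses = [
    ["family or friends"], ["school in local area"], ["family or friends"],
    ["school in local area", "family or friends"], ["family or friends"],
    ["school in local area"], ["school in local area"], ["family or friends"],
    ["school in local area"], ["family or friends"],
]

n_participants = len(coded_responses)
theme_counts = Counter(theme for response in coded_responses for theme in response)

# Report each theme as n (%) of participants, matching the style of the tables below.
for theme, count in theme_counts.most_common():
    print(f"{theme}: n={count}, {100 * count / n_participants:.1f}%")
```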
When asked "How did you find out about the nursing program?" The two most common themes were family or friends (n=5, 50.0%) and school in local area (n=5, 50.0%). These themes are reflected in the comments "From my spouse as she graduated from the college's PN program and she encouraged me to apply" and "Nursing has always been a goal and I started looking in my local area in 2013" respectively.
When asked "What attracted you most to this nursing program?" The most common response was location (n=4, 40.0%). One student commented "Location was the biggest factor for me." The most common theme reflected in response to "Describe how you felt when you made the decision to attend this nursing program" was "excited" (n=5, 50.0%). One student stated that "Excitement was the first feeling as I felt I could finally help my family which is Spanish speaking. I wanted to be able to help my culture and I was finally going to be able to do that." When asked "Once you were in the program, describe the methods for you to keep track of your progress" the most prominently mentioned was Blackboard (n=9, 90.0%), the learning management system used in the course. One student stated that they "lived on blackboard and bannerweb looking at grades and also listening to professors to see what was coming next." In regard to the question, "What type of guidance was available to you throughout the nursing program to help ensure your success?" instructors (n=10, 100.0%) was the overwhelming response. One student stated, "I can honestly say the instructors helped me a lot with extra guidance and resources." When asked "How were you able to manage going to school with your other life obligations for example home, work?" the most common response was that they were "unable to manage" (n=5, 50.0%). One student said "I tried to take it one step at a time: try to put the stress of it all aside. I really wasn't able to do that though." In response to "How flexible was the program in being able to meet your needs? Do have specific examples?" all student participants indicated that the program was "rigid and strict" (n=10, 100.0%). One student said, "There was a rigid class schedule with no deviations but it was beneficial for life scheduling."
Table 2 Descriptive Analysis of "What influences students' ability to complete the registered nursing program in an Associate of Science in Nursing (ASN) Program at a Georgia Technical College?" Themes Within the Qualitative Data Provided by Students (n=10)
Five semi-structured interview questions were asked of student participants to better understand "What do (ASN) students report as ways to address the causes of attrition at Georgia Technical College?" When asked "Did the nursing program or the faculty have any ways to help you be successful in your nursing courses?" the majority of students mentioned the program's instructors (n=8, 80.0%). One student mentioned that "The faculty had extra worksheets, very supportive of us, encouraged us to talk about it with others." In response to "If you were having difficulty did the faculty reach out to you or did you have to reach out to faculty for assistance?" both directions of communication were mentioned equally (faculty, n=6, 60.0%; student, n=6, 60.0%). Faculty-led communication was described by one student: "For the most part faculty did, but there was one class that did not and the main instruction was study more; it was not a focused response on you specifically." Another student said, "The student had to reach out for the most part by setting up a meeting for them to work with us after class. I was never turned down for a meeting." In response to "What other types of support were available to you?" the predominant theme was scholarships or financial aid (n=6, 60.0%). One student said, "Financial Aid for me and I had no out of pocket expense for school." When asked "What do you feel is your major reason for leaving the program?" the most prevalent theme was the HESI (n=6, 60.0%). One student commented, "HESI Examination -the 2 attempt and done approach. HESI writes your fate. HESI retakes were often in the same week which left little to no time for preparation." In response to "What support or services do you recommend to help students like yourself sustain in the nursing program until completion?" the most common theme was counting the HESI as a percentage of the course grade (n=4, 40.0%). One student explained, "If HESI was actually a percentage of the grade and not the end all catch all."
Descriptive analysis of faculty participants
Demographic and professional characteristics of faculty participants were obtained and analyzed. Six semi-structured interview questions were asked of faculty participants to better understand the question "What do ASN program faculty perceive as reasons for high attrition and low retention of ASN program students at a Georgia Technical College?" When asked "Tell me a little about yourself and why you chose to become a nurse educator," the most common theme was having worked as educational staff (n=3, 42.9%). For example, one faculty member reported, "I had worked in Long Term Care for years and did a lot of teaching with staff and was told by them that I was a good teacher. When I returned to school to get my MSN my project led me to being an educator because I worked with a program for family connections. I became a Parent Educator on drug abuse and addiction and then I got the job here." In response to "What do you think are the best methods to recruit students for the ASN Program?" the majority of faculty reported word of mouth (n=4, 57.1%). For example, one faculty member reported, "I think they hear about how fast they can complete the program and that is a big incentive as they want to make a good income fast." When asked "Describe what you have done to mentor ASN students?" most faculty described rapport (n=4, 57.1%). One faculty member said, "Taking a personal interest in the students. Providing individual assistance in class. Building an official relationship with each student. Also helping them to see the light at the end of the tunnel." In response to "Describe what you do to ensure success with your students?" the most common response from faculty was mentoring (n=4, 57.1%). One faculty member said, "Meet 1:1 with at risk students to show them things that would be helpful for them to work on. Also have them complete the form after a test to make them accountable to acknowledge areas of weakness." When asked "What programs are offered by the college to help students be successful?" most mentioned the Tutoring Center (n=5, 71.4%). One faculty member said, "The tutoring center, but it only has English and Math tutors, I think. Also, advisement." In response to "Have you noticed any areas where students were seeking support and they were not able to find it?" responses were varied: financial (n=2, 28.6%), full faculty support (n=2, 28.6%), and no areas needing support (n=2, 28.6%). Three semi-structured interview questions were asked of faculty participants to better understand the question "What do ASN program faculty report as ways to address the causes of attrition at a Georgia Technical College?" When asked "What do you perceive as the major reason(s) students fail to complete the ASN program?" the two most common themes were life issues (n=4, 57.1%) and lack of preparation (n=4, 57.1%). In describing "life issues," one faculty member said, "Personal issues -our current student population has a lot of life matters that impact them like finances and them having to work to support a family. Lack of study skills, lack of proper training for instructors," while another faculty member, describing "lack of preparation," said, "Many of the students are not prepared educationally. They have been spoon-fed the education up to now and passed on. Also, many are not prepared mentally for what it takes to get through nursing school. Family life such as divorce and also they are often the primary caregiver and only income provider."
In response to "Can you tell me some of the experiences that you have had with students who have eventually left the program?" faculty were equally split between positive experiences (n=3, 42.9%) and negative experiences (n=3, 42.9%). One faculty member describing a positive experience said "When it is obvious that they are not going to be successful; taking time to talk to them and one in particular who came back some time later to thank me for guiding them to another career path that they had been successful in and happy in that career." Another faculty member describing a negative experience comment "I have had a lot: unfortunately, the negative ones stick out. I have had a couple military students who suffered from PTSD and one in particular got really angry at me and kicked the door. He was kind of used to being in charge and did not want to take instruction. A lot come into the program and say that it is more than they expected." When asked, "What methods of support do you think would help to decrease attrition and increase retention?" the most common theme was a Tutor (n=3, 42.9%). One faculty member said, "Perhaps have a dedicated nursing tutor; smaller class size to allow for more 1:1 assistance; and also, maybe a 'better' selection/filtering process for admission."
Conclusions
There is a concerted effort among stakeholders, including institutions of higher learning, healthcare organizations, and society, to provide a nursing workforce sufficient to meet the demands of the population worldwide. Because community colleges produce the majority of entry-level registered nurses, it is imperative that they and other stakeholders join forces to combat high attrition and low retention in nursing programs (Aulck & West, 2017; Harrell & Reglin, 2018; Tamari et al., 2020). The primary objective of this basic qualitative study was to ascertain student and faculty perspectives on the causes of high attrition and low retention in undergraduate nursing programs of study. The study further sought to identify best-practice methods to curtail high attrition and low retention rates at the research institution. The information garnered from the study has the potential to positively impact attrition not only in the research institution's nursing program but also in nursing and allied health programs throughout South Georgia.
High attrition and low retention are not new phenomena in higher education (Turner & McCarthy, 2017). Prior research has identified multiple factors that contribute to these alarming rates across all programs of study, concluding that stress, anxiety, and a lack of coping skills affect students' ability to learn and contribute to elevated attrition rates (Labrague et al., 2017; Turner & McCarthy, 2017). Registered nursing programs are no different from other higher education programs in their difficulty retaining students to program completion. As nursing tries to maintain its standards as an esteemed profession, students' ability to sustain until graduation continues to wane. High attrition and low retention in nursing programs remain problematic, with a detrimental effect on completion rates and, in due course, on the supply of nursing professionals to meet workforce demand (Peruski, 2019). Technical and community college nursing programs have attempted to answer the call to produce a practicable nursing workforce; however, high attrition and low retention persist. Consistent with prior research, the current basic qualitative study substantiated many previous findings but also shed light on methods that the research institution had not attempted and that might prove beneficial in lowering attrition rates in the ASN program. The primary theme recurrent in both student and faculty responses was the need for a tutor in the tutoring center specific to health sciences and nursing majors (Tower et al., 2015). Students and faculty both agreed that a formal faculty-student mentoring program could help the research institution improve students' capacity to complete the ASN program (Bice, 2018; Ingraham et al., 2018; Tower et al., 2015). The most recurrent theme from students regarding their major reason for leaving the ASN program was the inability to achieve the required score on the high-stakes course final exams, the HESIs. Students did not feel that the HESI should be removed, but rather that it should count only as a specified percentage of their final course grade. Conversely, no faculty respondents reported the HESI as a major reason for students failing to sustain in the ASN program.
Data Availability (excluding Review articles)
The qualitative data supporting the findings of this study were gathered from a Technical College in Georgia. The datasets generated during and/or analyzed during the current study are available from the corresponding author on reasonable request.
Conflicts of Interest
There are no conflicts of interest.
Funding Statement
There was no funding attained for this research.
The Association between Helicobacter pylori Infection and Chronic Hepatitis C: A Meta-Analysis and Trial Sequential Analysis
Purpose. Helicobacter pylori is a common gastric disease-inducing pathogen. Although an increasing number of recent studies have shown that H. pylori is a risk factor for liver disease, the potential association between H. pylori infection and chronic hepatitis C still remains controversial. The aim of our meta-analysis was to evaluate a potential association between H. pylori infection and chronic hepatitis C. Methods. We searched the PubMed, Embase, CNKI, Web of Science, and the Cochrane Central Register of Controlled Trials (CENTRAL) databases between January 1, 1994, and May 1, 2015. Results. This study included a total of 1449 patients with chronic hepatitis C and 2377 control cases. The prevalence of H. pylori was significantly higher in patients with chronic hepatitis C than in those without chronic hepatitis C. The pooled odds ratio was 2.93. In a subgroup analysis, the odds ratios were 4.48 for hepatitis C virus- (HCV-) related cirrhosis and 5.45 for hepatocellular carcinoma. Conclusion. Our study found a strong association between H. pylori and chronic hepatitis C, particularly during the HCV progression stage; thus, we recommend active screening for H. pylori in patients with chronic hepatitis C.
Introduction
Helicobacter pylori (H. pylori) is a Gram-negative spiral-shaped bacterium that colonizes the gastric mucosa and can induce chronic gastritis, gastric ulcers, and gastric malignancy [1]. Worldwide, approximately 50% of the population is infected with H. pylori [2]. Both developing and developed countries have a high incidence of H. pylori infection [3]. The rate of H. pylori infection correlates significantly with socioeconomic conditions and the age at infection. In addition, all H. pylori-infected individuals subsequently develop gastritis [4].
Hepatitis C virus (HCV) was first recognized in 1989. To date, nearly 180 million people worldwide have been infected with HCV. If not well controlled, chronic hepatitis C (CHC) can progress into cirrhosis and ultimately into hepatocellular carcinoma (HCC) [5,6], which might lead to higher patient mortality rates [7]. To date, no effective medicines have been developed to prevent the progression of CHC to cirrhosis and/or HCC because the mechanism by which cirrhosis or HCC occurs is not fully understood. However, the hepatitis viral load, genotype, and infection duration are known to play important roles in the development and progression of HCC [8].
Increasing evidence suggests that H. pylori may be a risk factor for the development of cirrhosis and HCC in patients with CHC [8][9][10]; however, some researchers have suggested that H. pylori might not contribute to the mechanism of HCV-related HCC [11,12]. The relationship between H. pylori and CHC remains controversial, and it is unclear whether H. pylori is associated with CHC and the progression of HCV-related cirrhosis and HCC. In this study, we aimed to confirm the associations between H. pylori and CHC and to further assess the relationship of H. pylori with HCV-related cirrhosis and HCC through a meta-analysis. The results of this study might therefore inform clinical approaches for patients with CHC who might develop cirrhosis or HCC.
Eligibility Criteria.
We searched PubMed, Embase, Web of Science, China National Knowledge Infrastructure (CNKI), and Cochrane Central Register of Controlled Trials (CENTRAL) databases from January 1, 1994, to May 1, 2015. We used keywords or subject headings to search for "hepatitis C" and "helicobacter pylori or helicobacter species." There were no language restrictions. We also screened bibliographies of selected original studies, review articles, and relevant conference abstracts.
Citations were merged in Endnote version X7 (Thomson Reuters, New York, NY, USA) to facilitate management. Two reviewers (J. Wang and Y.-X. Zheng) independently applied the inclusion criteria to all retrieved studies. Disagreements between reviewers were resolved by consensus. For each eligible study, the following criteria were set and reviewed independently by the reviewers: (1) Eligible study designs were randomized controlled trials (RCTs), case-control studies, or cross-sectional studies comparing H. pylori-related morbidity between patients with CHC and a control group with nonhepatic disease. (2) Concrete numbers of cases and controls and a H. pylori positivity rate were included. (3) The study groups exhibited definite HCV positivity, whereas the control groups were HCV-negative. (4) H. pylori was detected in all study groups through serological or PCR testing. (5) Studies presenting information exclusively about patients undergoing liver transplantation, viral hepatitis (e.g., hepatitis A, B, or E virus), human immunodeficiency virus, acute HCV, autoimmune liver disease, nonalcoholic fatty liver disease (NAFLD), and other types of hepatitis were excluded.
Data Extraction and Quality Assessment.
For each eligible study, the following data elements were selected: first author, publication year, country, H. pylori detection method, the number of cases and controls, the prevalence of H. pylori infection in cases and controls, and numbers of HCC, cirrhosis, and noncirrhosis cases among patients with CHC. Twelve studies, all case-control studies, were included after consensus was achieved between the two reviewers. The observational study quality was assessed using the Newcastle-Ottawa Scale (NOS) [13,14]. This scale scores studies across three categories: selection (four stars), comparability of study groups (two stars), and assessment of outcome/exposure (three stars). The star rating system is used to indicate the quality of a study, and a maximum score of nine stars is possible; the studies included herein were graded on an ordinal star-scoring scale with higher scores indicating a higher quality.
Statistical Analysis
RevMan 5.2 (Nordic Cochrane Centre, Cochrane Collaboration, Copenhagen, Denmark) was used to perform the meta-analysis. We used TSA, version 0.9 (Copenhagen Trial Unit, Copenhagen, Denmark), to assess the required information size. Data were combined when appropriate and displayed in forest plots. Risk ratios (RRs) were used to assess risk in cohort studies; odds ratios (ORs) were provided for case-control studies and were regarded as approximate RRs in this meta-analysis. Study heterogeneity was tested using the chi-square test and I² statistics. If significant heterogeneity was found (chi-square test, P < 0.10 and I² > 50%), the random-effects model was used for the analysis; if the heterogeneity was considered to be insignificant (chi-square test, P ≥ 0.10 and I² < 50%), on the other hand, the fixed-effect model was used. Studies with substantial heterogeneity (I² > 50%) were considered unsuitable for the meta-analysis. Subgroup and sensitivity analyses were used to discover the source of heterogeneity. Confounding factors were segregated for further analysis.
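As an illustration of the decision rule described above, the sketch below pools study-level odds ratios by the inverse-variance method and switches to a DerSimonian-Laird random-effects model when heterogeneity is significant. The 2×2 counts are placeholders rather than data from the included studies, and the published analysis was performed in RevMan 5.2.

```python
import numpy as np
from scipy import stats

# Each row: (events_cases, total_cases, events_controls, total_controls) - placeholder data.
studies = np.array([
    (40, 100, 30, 150),
    (55, 120, 45, 200),
    (25,  60, 20, 110),
], dtype=float)

a = studies[:, 0]; b = studies[:, 1] - a          # cases: H. pylori positive / negative
c = studies[:, 2]; d = studies[:, 3] - c          # controls: H. pylori positive / negative

log_or = np.log((a * d) / (b * c))
var = 1 / a + 1 / b + 1 / c + 1 / d               # variance of each log OR
w_fixed = 1 / var

# Cochran's Q and I-squared quantify between-study heterogeneity.
pooled_fixed = np.sum(w_fixed * log_or) / np.sum(w_fixed)
Q = np.sum(w_fixed * (log_or - pooled_fixed) ** 2)
df = len(studies) - 1
p_het = stats.chi2.sf(Q, df)
I2 = max(0.0, (Q - df) / Q) * 100 if Q > 0 else 0.0

if p_het < 0.10 and I2 > 50:
    # Random-effects (DerSimonian-Laird): inflate study variances by tau-squared.
    tau2 = max(0.0, (Q - df) / (np.sum(w_fixed) - np.sum(w_fixed ** 2) / np.sum(w_fixed)))
    w = 1 / (var + tau2)
else:
    w = w_fixed

pooled = np.sum(w * log_or) / np.sum(w)
se = np.sqrt(1 / np.sum(w))
print(f"I^2 = {I2:.0f}%, pooled OR = {np.exp(pooled):.2f} "
      f"(95% CI {np.exp(pooled - 1.96 * se):.2f}-{np.exp(pooled + 1.96 * se):.2f})")
```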
Result
Twelve case-control studies were included from among a total of 159 studies identified through online database searches. Most ineligible studies were excluded on the basis of information in the title or abstract. The selection and analysis process is shown in a flow diagram in Figure 1. The characteristics of included studies [15][16][17][18][19][20][21][22][23][24][25][26] are shown in Table 1.
H. pylori Positivity Rates of the HCV and Control Groups.
In the meta-analysis of 12 studies (Figure 2(a)), a pooled OR of 2.93 (95% confidence interval [2.30, 3.75]; P = 0.05) was determined, indicating a 2.93-fold higher H. pylori positivity rate among patients with CHC than among healthy controls. Because all 12 studies together exhibited heterogeneity (I² = 45%) in the total analysis, we divided the included studies into two subgroups based on the H. pylori detection method. We found that the subtotal pooled OR of the PCR test subgroup was obviously higher than that of the serological test subgroup (4.78 versus 2.89), and the I² values of both subgroups were nearly 50% (56% and 47%, respectively). The major source of interstudy heterogeneity might, therefore, have originated from other confounding factors. As shown in the study characteristics (Table 1), three articles [19,20,26] limited the HCV groups to patients with cirrhosis or HCC. Therefore, disease progression might be an influencing factor. We conducted a sensitivity analysis using a gradual elimination process and found that if we excluded these three studies [19,20,26], the heterogeneity decreased to the lowest value, as shown in Figure 2(b) (total group I² = 12%, serological group I² = 0%). According to the above information, patients with CHC appear to have a nearly 2.59-fold higher H. pylori positivity rate when compared to healthy controls. However, disease stage was the main factor influencing the H. pylori positivity rate among patients with CHC. Therefore, further analysis of the extracted data according to HCV progression stage is needed.
Subgroup Analysis Based on HCV Progression Stage.
To attenuate the influence of the HCV disease stage on the meta-analysis, we further stratified the extracted data from the included studies into a noncirrhosis group, a cirrhosis group, and an HCC group, each of which was compared with the control group. Forest plots of all meta-analyses are shown in Figure 3. All I² values for the above analyses were <50% (I² = 5%, 20%, and 42%, respectively), indicating that the studies exhibited low heterogeneity with respect to each other in the three meta-analyses. The extracted data were suitable for analysis, and the results were reliable because the analysis was stratified by disease stage.
Trial Sequential Analysis.
Trial sequential analysis (TSA) combines conventional meta-analysis methodology with repeated significance testing applied to accumulating data in clinical trials. TSA uses cumulative Z-curves to assess relationships with respect to the conventional significance boundaries (Z = ±1.96), the required information size, and the trial sequential monitoring boundaries. In our trial sequential analysis, the type I error risk was set at α = 0.05 with a power of 0.80. The relative risk reduction (RRR) was set at 10%, based on the average incidence in the included studies. The required information size was calculated to be 5407, whereas the number of participants actually included was 3826. From Figure 4, we concluded that although the Z-curve did not reach the required information size (RIS), it crossed both the trial sequential monitoring boundary and the conventional boundary, showing that the result was true-positive.
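For orientation, the sketch below computes a conventional (heterogeneity-unadjusted) information size for comparing two proportions, the quantity on which TSA builds its required information size. The control-group prevalence used here is illustrative only, and the published RIS of 5407 was produced by the TSA software, which applies its own adjustments.

```python
from math import ceil
from scipy.stats import norm

def information_size(p_control, rrr, alpha=0.05, power=0.80):
    """Conventional total sample size for comparing two proportions,
    given a type I error alpha, power, and relative risk reduction (RRR)."""
    p_exp = p_control * (1 - rrr)                 # expected proportion under the assumed RRR
    z_a = norm.ppf(1 - alpha / 2)
    z_b = norm.ppf(power)
    n_per_group = ((z_a + z_b) ** 2 *
                   (p_control * (1 - p_control) + p_exp * (1 - p_exp)) /
                   (p_control - p_exp) ** 2)
    return ceil(2 * n_per_group)                  # total across both groups

# Illustrative values only: a 50% control-group H. pylori prevalence and a 10% RRR.
print(information_size(p_control=0.50, rrr=0.10))
```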
Evaluation of Publication Bias.
A funnel plot is used to assess publication bias when more than 10 studies are included; we included 12 studies in this meta-analysis. This evaluation is shown in Figure 5. The graph appears as a symmetrical inverted funnel, indicating a very low risk of publication bias.
Discussion
H. pylori infection is a major global health issue. In addition, the World Health Organization has recognized H. pylori as a group 1 carcinogen contributing to gastric cancer since 1994 [27]. Furthermore, numerous lines of evidence indicate an association between H. pylori and liver disease, particularly CHC [28]. Although an association between H. pylori infection and CHC has been reported, it remains controversial. It has been reported that histopathological changes were more severe in patients with H. pylori-positive HCV than in patients with HCV alone [29]. Furthermore, Umemura et al. demonstrated that the eradication of H. pylori could increase sustained virological responses in patients with CHC [30]. Moreover, many in vitro studies have demonstrated a cytopathic effect of H. pylori on hepatocytes [31,32]. Therefore, our results are consistent with the above viewpoints.
Moreover, the subgroup analysis demonstrated a 4.48-fold higher H. pylori incidence rate (95% CI: 3.49-5.74) among patients with HCV-related cirrhosis relative to the control group, which strongly indicates a correlation between late-stage cirrhosis and H. pylori infection. However, two [16,19] of the 10 studies reported no association between HCV and Child-Pugh cirrhosis staging, whereas the remaining studies did not discuss this issue. Nevertheless, all of the included articles supported a correlation between late-stage cirrhosis and H. pylori infection. Furthermore, El-Masry et al. reported an increasing incidence of H. pylori as the Child-Pugh class increased [33]. Given the limited number of articles, future studies should address the relationship between H. pylori infection and the Child-Pugh class in patients with HCV-related cirrhosis.
Finally, the OR for the HCC group relative to the control group was 5.45. This finding indicates a strong link between H. pylori and HCC. Many clinical observational studies have reported the detection of H. pylori in abnormal liver tissues, particularly cancerous tissues [28]. A meta-analysis conducted by Xuan et al. demonstrated a positive association between H. pylori and HCC. The patients with HCC in that study had diverse etiologies, and only two of the articles included in that study were related to HCV-positive HCC [34]. Therefore, the observations of Xuan and colleagues agree with our study findings.
Our study had some potential limitations. Firstly, the final results of this meta-analysis were greatly affected by the limitations of the included publications. Although we attempted to identify all related studies, it is possible that some escaped our attention. Furthermore, some studies with negative results might not have been published, which would cause unavoidable publication bias. In addition, all articles identified in our search were in the English and Chinese languages, which may have caused language bias. Regarding the control group source, most included studies used blood donors as controls, although some studies used hospital samples. This might have also generated bias.
Secondly, the H. pylori detection methods differed among the included studies, a factor that might have led to different positivity rates. Some of the included studies detected and diagnosed H. pylori infection via the serology method, whereas others used PCR-based methods. Usually, the serology method is less sensitive and specific [35]. Meanwhile, both H. pylori and H. hepaticus belong to the Helicobacter genus [36]. H. pylori might share genetic sequences with other Helicobacter species such as H. hepaticus. Therefore, diagnosis of H. pylori infection via PCR alone is not sufficient.
Thirdly, nearly all included articles were case-control studies. The quality of this type of study is lower than that of randomized clinical trials. In addition, both age and socioeconomic status have been considered high-risk factors for H. pylori infection [27]. El-Masry et al. reported that H. pylori-infected individuals older than 40 years had a high risk of severe HCV-related disease [33]. However, Weiming only included older people in their study, whereas the studies conducted by Yong-Gui et al., Shang-Wei et al., and Pellicano et al. included subjects aged 20-89 years; none of the other included studies mentioned the subjects' ages. The results would have been better if those authors had balanced the incidence of H. pylori infection with respect to age. Moreover, the patients in our study resided in different countries, including China, Italy, France, Egypt, Sweden, and Japan. Therefore, the H. pylori incidence might have varied with respect to socioeconomic differences. This might have led to misestimation of the odds ratios.
In conclusion, our meta-analysis demonstrated a positive association between H. pylori infection and CHC. In particular, it also revealed strong correlations of H. pylori infection with HCV-related cirrhosis and HCV-related HCC. We suggest the importance of H. pylori screening of patients with chronic hepatitis, particularly those with HCV-related cirrhosis and/or HCC. Obviously, given the limitations of the included studies, the role of H. pylori as a risk factor in the progression of CHC remains unclear. We recommend conducting additional randomized controlled trials and prospective studies to clarify the effect of this pathogen on CHC development.
analyze the data; Ning Li, Rong-Rong Zhou, and Yan Huang revised the draft; and Ze-Bing Huang and Xue-Gong Fan contributed to supporting the study. Juan Wang and Wen-Ting Li contributed equally to the work, and they are the co-first authors.
Assessment of [125I]WYE-230949 as a Novel Histamine H3 Receptor Radiopharmaceutical
Histamine H3 receptor therapeutics have been proposed for several diseases such as schizophrenia, attention deficit hyperactivity disorder, Alzheimer's disease and obesity. We set out to evaluate the novel compound, [125I]WYE-230949, as a potential radionuclide imaging agent for the histamine H3 receptor in brain. [125I]WYE-230949 had a high in vitro affinity for the rat histamine H3 receptor (Kd of 6.9 nM). The regional distribution of [125I]WYE-230949 binding sites in rat brain, demonstrated by in vitro autoradiography, was consistent with the known distribution of the histamine H3 receptor. Rat brain uptake of intravenously injected [125I]WYE-230949 was low (0.11 %ID/g) and the ratio of specific: non-specific binding was less than 1.4, as determined by ex vivo autoradiography. In plasma, metabolism of [125I]WYE-230949 into a less lipophilic species occurred, such that less than 38% of the parent compound remained 30 minutes after injection. Brain uptake and metabolism of [125I]WYE-230949 were increased and specific binding was reduced in anaesthetised compared to conscious rats. [125I]WYE230949 is not a potential radiotracer for imaging rat histamine H3 receptors in vivo due to low brain uptake, in vivo metabolism of the parent compound and low specific binding.
Introduction
Histamine is a neurotransmitter in the central nervous system that regulates its own release and synthesis via a presynaptic G-protein-coupled histamine H 3 autoreceptor [1]. The histamine H 3 receptor also acts as a heteroreceptor regulating other neurotransmitters, such as acetylcholine [2,3], noradrenaline [4], dopamine [5] and serotonin [6]. There is evidence for histamine H 3 receptor dysregulation in several diseases, including Alzheimer's disease and vascular dementia where a negative correlation between fronto-cortical H 3 receptor density and cognitive decline exists [7,8]. There is increased histamine H 3 receptor binding in the substantia nigra of patients with Parkinson's disease and in the prefrontal cortex of schizophrenic patients [9,10] and histamine H 3 receptor knock-out worsens multiple sclerosis symptoms in a mouse model [11]. Currently there are several histamine H 3 antagonist/inverse agonists in Phase II and III clinical trials for conditions such as excessive daytime sleepiness in Parkinson's disease; cognitive dysfunction in attention deficit hyperactivity disorder, schizophrenia and Alzheimer's disease; and metabolic dysfunction in obesity and diabetes mellitus (reviewed in [12,13]). However the role of histaminergic brain networks in disease is still poorly understood as highlighted in a recent review which calls for further analysis into histamine receptor expression in specific neural pathways [14]. To this end a readily available radiopharmaceutical which could longitudinally measure histamine H 3 receptor expression in different disease states would be useful.
Several attempts have been made to develop single photon emission computed tomography (SPECT) and positron emission tomography (PET) radiotracers for the histamine H 3 receptor (reviewed in [15]). However, until recently, translation into human studies has proved difficult due to a number of factors. Progress with SPECT radiotracers for the histamine H 3 receptor has been limited [17]. A meta-iodo substituted benzophenone derivative (4-(3-(1H-imidazol-4-yl)propyloxy)phenyl 3-iodophenyl methanone) was described with nanomolar affinity for the histamine H 3 receptor, but to our knowledge has not been taken forward for further development [18].
In particular, for PET imaging, the thioperamide analogues [ 18 F]VUF5000 and [ 18 F]VUF5182 exhibited very low brain uptake: less than 0.02%ID/g [19]. [ 11 C]JNJ-10181457 had low specific binding, possibly because its binding affinity is in the high nanomolar range [20]. [ 18 F]Fluoroproxyfan bound heterogeneously in the rat brain, and striatal, thalamic and hypothalamic binding was displaceable by unlabeled fluoroproxyfan in vivo, although cortical binding was not displaced [21]. [ 18 F]Merck 2b and [ 18 F]9 ([ 18 F]XB-1) have shown promise in mouse and monkey models but have not as yet been taken forward into clinical assessment [22,23]. Recently, two carbon-11 radiotracers have successfully been used in clinical trials. [ 11 C]GSK189254 has been used to quantify histamine H 3 receptor availability in humans [24] and to monitor the target engagement and pharmacokinetics of a number of H 3 antagonists in humans and primates [25,26,27]. We aimed to evaluate [ 125 I]WYE-230949 as a potential radionuclide imaging agent for the histamine H 3 receptor. WYE-230949 (Fig. 1) is a benzimidazole-substituted 1,3′-bipyrrolidine benzamide, a group of compounds that have been shown to act as high-affinity antagonists at human histamine H 3 receptors, with 1000-fold selectivity for human histamine H 3 over H 1 and H 2 receptors [29,30]. WYE-230949 contains an iodine in the 7′ position of the imidazole ring, has a Ki for human and rat histamine H 3 receptors of 0.3 nM and 5.7 nM respectively, and has a molecular structure suitable for radioiodination [29,31]. To control for the effects of isoflurane anaesthesia on radiotracer uptake we performed studies in anaesthetised and conscious animals. We present here the in vitro, ex vivo and in vivo evaluation of [ 125 I]WYE-230949 in rats.
Reagents
Sodium [ 125 I]iodide in dilute (0.1 M) NaOH was purchased from Perkin Elmer Life and Analytical Sciences, Boston, MA (specific activity 81.76GBq/mM); iodophenpropit dihydrobromide was purchased from Tocris Bioscience, Bristol, UK. Other reagents and chemicals were purchased from Riedel de Haën, Seelze, Germany and Sigma-Aldrich, Gillingham, UK, and were used without further purification. WYE230949 and the tributylstannyl precursor of WYE-230949 were obtained from Wyeth Research, Princeton, NJ.
Radiolabelling of WYE-230949
The radiolabelling of WYE-230949 has previously been reported [31]. Briefly, to a V-vial containing 74-370 MBq of Na[ 125 I] made up to 50 µl with 0.05 M NaOH was added 20 µl of 1 M HCl, 0.3 mg of the tributylstannyl precursor of WYE-230949 in 100 µl ethanol, and 50 µl chloramine-T solution (1 mg/ml). The reaction was mixed via vortex and incubated at room temperature for 5 min, after which it was quenched by addition of 200 µl of mobile phase. The reaction mixture was analysed by analytical HPLC before purification by preparative HPLC. The fraction containing the desired product was collected and the solvent was removed by rotary evaporation. The product was reconstituted with 0.9% saline, and finally passed through a 0.22 µm filter. The radiochemical purity was determined by analytical HPLC and the specific activity of the final product was calculated using a concentration-response curve obtained using the corresponding cold standard.
For in vitro studies, carrier WYE-230949 was added in order to obtain the higher concentrations of WYE-230949 required for K d determination. Unlabelled WYE-230949 was added prior to purification by preparative HPLC in order to allow the accurate measurement of specific activity from the preparative HPLC trace. For in vivo studies of [ 125 I]WYE-230949, 6 mg ascorbic acid was added during the formulation, and the radioligand prepared as described above was stored in a refrigerator and protected from light until use.
Determination of the partition and distribution co-efficients (logP and logD)
The lipophilicity of [ 125 I]WYE-230949 was determined in octanol/water and at physiological pH using a modification of the shake-flask method [32]. Briefly, [ 125 I]WYE-230949 was mixed with 1 ml of octanol and 1 ml of water (for logP) or with 1 ml of octanol and 1 ml of phosphate buffer (100 mM, pH 7.4) (for logD). The radioactivity incorporated in both phases was determined using a gamma counter (Cobra Gamma Counter, Packard, Perkin Elmer Life and Analytical Sciences, Boston, MA, USA). The assay was performed in triplicate on 3 separate occasions. LogP was calculated as log [counts in octanol/counts in water] and logD was calculated as log [counts in octanol/counts in pH 7.4 buffer].
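A minimal sketch of the shake-flask calculation described above; the gamma-counter readings are hypothetical and are only meant to show how logP (and, with buffer in place of water, logD) is derived from paired phase counts.

```python
import math
from statistics import mean, stdev

def log_partition(counts_octanol, counts_aqueous):
    """Shake-flask estimate: log10(counts in octanol / counts in aqueous phase)."""
    return math.log10(counts_octanol / counts_aqueous)

# Hypothetical triplicate gamma-counter readings (cpm) from one experiment.
octanol = [39500, 40800, 40100]
water = [1010, 1035, 990]

logp_values = [log_partition(o, w) for o, w in zip(octanol, water)]
print(f"logP = {mean(logp_values):.2f} +/- {stdev(logp_values):.2f}")
```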
In vitro radioligand binding
Animals (male Sprague-Dawley rats; Harlan, UK) were killed by cervical dislocation. Whole brains were immediately excised and placed into ice-cold Tris-HCl buffer (Tris 50 mM, pH 7.4; EDTA 5 mM), homogenised and centrifuged at approximately 40,000 g for 10 min at 4˚C (Beckman J2-21M/E). The pellet was resuspended in 25 ml of buffer and centrifuged as previously, then the final pellet was resuspended in 10 ml of ice-cold Tris buffer and stored at -50˚C until required. Protein content was analysed using Bio-Rad reagent; absorbance was read on a spectrophotometer at 562 nm [33]. [ 125 I]WYE230949 binding assays were carried out in a final incubation volume of 500 µl containing Tris-HCl buffer (Tris 50 mM, pH 7.4; EDTA 5 mM), brain homogenates (1.47 to 2.27 mg/ml of protein) and [ 125 I]WYE230949 (0.04-102 nM). Non-specific binding was defined in the presence of 10 µM iodophenpropit. Samples were incubated at 30˚C for 30 min, then filtered rapidly through Whatman GF/B filters (pre-treated with 0.3% polyethylenimine; Aldrich, UK) using a Brandel M24R cell harvester and the filters washed with 3 × 4 ml ice-cold Tris buffer. Radioactivity on the filters was determined by liquid scintillation counting a minimum of 48 h after filtration. All assays were performed in triplicate. Specific binding was determined by subtracting non-specific binding from total binding at each concentration; K d and B max values were derived using GraphPad Prism (version 4.03), and expressed as mean ± SEM of the triplicate assays.
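The K d and B max estimation described above can be reproduced with a standard one-site fit of specific binding (total minus non-specific); the sketch below uses hypothetical binding data and SciPy rather than GraphPad Prism.

```python
import numpy as np
from scipy.optimize import curve_fit

def one_site(L, Bmax, Kd):
    """One-site specific binding model: B = Bmax * [L] / (Kd + [L])."""
    return Bmax * L / (Kd + L)

# Hypothetical specific-binding data: free ligand (nM) and bound (fmol/mg protein).
L = np.array([0.04, 0.1, 0.3, 1.0, 3.0, 10.0, 30.0, 100.0])
B = np.array([3.0, 8.0, 22.0, 62.0, 160.0, 300.0, 420.0, 480.0])

popt, pcov = curve_fit(one_site, L, B, p0=(500.0, 5.0))
Bmax, Kd = popt
Bmax_se, Kd_se = np.sqrt(np.diag(pcov))
print(f"Bmax = {Bmax:.0f} +/- {Bmax_se:.0f} fmol/mg protein, Kd = {Kd:.1f} +/- {Kd_se:.1f} nM")
```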
In vivo administration of [ 125 I]WYE-230949
Animal experiments were conducted under the UK Animals (Scientific Procedures) Act 1986. This work was approved by the ethical review committee at the University of Glasgow (PPL: 60/3436) prior to the start of study. Naïve male Sprague-Dawley rats (Harlan, UK), weighing 217 to 352 g were either conscious or anaesthetised (2.5-3% isoflurane via a tracheostomy and artificial ventilation in 70/30% N 2 O/O 2 ) for the duration of the experiment. Anaesthetised rats had their respiration and temperature monitored throughout. [ 125 I]WYE-230949 was administered via a femoral vein cannula to anaesthetised rats or via the lateral tail vein to conscious rats. The injected dose for each rat was calculated by measuring the syringe in a dose calibrator before and after [ 125 I]WYE-230949 administration. The radioactive and radiochemical amounts administered to individual rats ranged from 12.6 to 24.5 MBq and 0.32 to 0.70 mg/kg respectively in all in vivo studies.
In vivo microSPECT imaging
MicroSPECT imaging was performed in order to image the uptake of [ 125 I]WYE-230949 in the brain over time. SPECT scanning was performed using a MollyQ 50 microSPECT scanner (Neurophysics Corp., Shirley, MA, U.S.A.). Rats were anaesthetised for the duration of the scan (2-3% isoflurane in 70/30% N 2 O/O 2 ) and physiological parameters were maintained as described previously [34]. The scanning protocol was designed to ensure that a forebrain slice encompassing structures dense in H 3 binding sites, including cortex, striatum, hippocampus and thalamus, was captured; the first slice was 14.4 mm caudal to the eyes, approximately at the level of the caudate nucleus. Scanning was commenced 12 min prior to intravenous injection of [ 125 I]WYE-230949 via the tail vein. Ten sequential coronal slices (400 µm/slice) were collected over a total scanning distance of 4 mm with a scanning time of 12 min per repetition. This protocol was repeated over the same 4 mm slice for a total of 17 repetitions. SPECT images were co-registered with corresponding T 2 -weighted MRI images obtained from a strain- and weight-matched rat using anatomical landmarks from a previously registered whole brain scan. MRI was carried out on a Bruker Biospec 7T using a T 2 -weighted sequence with an isotropic resolution of 300 µm. The MR image set was manually aligned to the SPECT image set using AMIDE (A Medical Image Data Examiner) freeware. Each post-injection SPECT stack was compiled to produce a 170-slice stack of the entire scan over time using ImageJ freeware (NIH, USA). In order to determine whole brain uptake, a region of interest (ROI) encompassing cortical, striatal, hippocampal and thalamic structures was defined using anatomical information from the MRI data set. The mean intensity of the ROI in each slice of the SPECT images was measured by plotting a z-axis profile. These were averaged over 12-minute time bins to produce a mean intensity for the entire 4 mm slice. These were converted to emissions per second per mm 3 using the scan scaling factor and then to disintegrations per second per ml, or Bq/ml. Finally, these values were expressed as % of injected dose per ml of tissue and plotted against time.
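The conversion chain from ROI intensity to %ID/ml described above is summarised in the sketch below; the calibration factor, injected dose and time-activity values are hypothetical, and the scanner-specific scaling and efficiency corrections are folded into a single assumed factor.

```python
import numpy as np

def roi_to_percent_id_per_ml(mean_intensity, cal_factor_bq_per_ml, injected_dose_mbq):
    """Convert a mean ROI intensity to %ID/ml. The calibration factor is a
    hypothetical stand-in for the scanner scaling factor, detection efficiency
    and mm^3 -> ml volume conversion described in the text."""
    bq_per_ml = mean_intensity * cal_factor_bq_per_ml
    return 100.0 * bq_per_ml / (injected_dose_mbq * 1e6)

# Hypothetical 12-min time bins for a forebrain ROI after injection.
times_min = np.array([6, 18, 30, 42, 54])
intensity = np.array([2.4, 1.9, 1.6, 1.3, 1.1])
uptake = roi_to_percent_id_per_ml(intensity, cal_factor_bq_per_ml=1.0e4, injected_dose_mbq=18.0)
for t, u in zip(times_min, uptake):
    print(f"{t:>3} min: {u:.3f} %ID/ml")
```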
Ex vivo biodistribution
Ex vivo biodistribution studies were performed in both conscious and anaesthetised rats in order to investigate the effect of anaesthesia on [ 125 I]WYE-230949 uptake. Rats were killed by cervical dislocation at 30 or 120 min following [ 125 I]WYE-230949 administration; these time points were chosen based on the biological half-life of [ 125 I]WYE-230949 determined from the microSPECT imaging studies. Serial arterial blood sampling via a femoral artery cannula was performed in anaesthetised rats only. Blood was collected into heparin-coated tubes for analysis of plasma and into EDTA-coated tubes for analysis of whole blood. At either 30 or 120 min following [ 125 I]WYE-230949 administration the brain, heart, lungs, kidneys and liver were dissected. Samples were counted on a Gamma Scintillation Counter (Packard Cobra II D5010, UK) and counts per minute (cpm) converted to kBq using a standard curve. Radioactivity was corrected for decay of 125 I and expressed per sample weight and as a percentage of injected dose (%ID/g tissue).
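A minimal sketch of the decay-corrected %ID/g calculation described above, assuming the ~59.4-day physical half-life of iodine-125; the sample counts, weight and injected dose are hypothetical.

```python
import math

I125_HALF_LIFE_H = 59.4 * 24.0      # physical half-life of iodine-125, ~59.4 days, in hours

def percent_id_per_g(sample_kbq, sample_weight_g, injected_mbq, hours_since_injection):
    """Decay-correct a gamma-counter measurement back to injection time and
    express it per gram of tissue as a percentage of the injected dose."""
    decay_factor = math.exp(math.log(2) * hours_since_injection / I125_HALF_LIFE_H)
    corrected_mbq = sample_kbq * decay_factor / 1000.0
    return 100.0 * corrected_mbq / injected_mbq / sample_weight_g

# Hypothetical brain sample (35 kBq, 1.8 g) measured 2 h after injecting 18 MBq.
print(f"{percent_id_per_g(35.0, 1.8, 18.0, 2.0):.3f} %ID/g")
```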
Ex vivo autoradiography
Regional brain uptake was determined by ex vivo autoradiography; studies were performed in both conscious and anaesthetised rats in order to investigate the effect of anaesthesia on [ 125 I]WYE-230949 uptake. At either 30 or 120 min following [ 125 I]WYE-230949 administration rats were killed by cervical dislocation. The brains were removed, frozen in isopentane at -42˚C and stored at -20˚C before sectioning (20 µm) in a cryostat (Bright Instrument Company Ltd). Three sections were taken every 400 µm for the entire length of the rat brain and thaw-mounted onto poly-L-lysine coated slides. Sections were dried at room temperature and apposed to X-ray film (Kodak Biomax MR-1) for approximately 2 weeks. The resultant autoradiograms were analysed using computer-based densitometry (MCID, Imaging Research, Canada). Relative optical density (ROD) measurements were obtained from 14 ROIs defined with reference to a rat brain atlas [35]. For each brain region examined, bilateral readings were obtained in triplicate and the average reading determined. ROD values in the cerebellum were used as a measure of non-specific binding. Specific binding was calculated as ROD in ROI/ROD in cerebellum.
In vitro autoradiography
Sections (20 µm) cut from a frozen rat brain were pre-incubated in Tris-HCl buffer (50 mM, pH 7.4; EDTA 5 mM) for 15 min then incubated in 5 nM [ 125 I]WYE-230949 for 90 min at room temperature. Non-specific binding was defined in adjacent sections in the presence of 5 µM iodophenpropit dihydrobromide. Sections were washed (2 × 30 min in Tris buffer) and briefly (15 s) rinsed in distilled water before drying and apposition to Kodak Biomax for 24 h. These autoradiograms were generated only for visual comparison with those obtained ex vivo.
Ex vivo analysis of rat plasma and brain samples
After intravenous injection of [ 125 I]WYE-230949 blood samples were obtained by terminal cardiac puncture from anaesthetised rats or via the lateral tail vein from conscious rats at either 30 or 120 min. Rats were then immediately killed by cervical dislocation and the brain removed. Plasma was separated from whole blood by centrifugation and the protein precipitated by combining 400 µl of plasma with 400 µl of ice-cold acetonitrile. A sample (200 µl) of the supernatant was made up to 1 ml with distilled water and injected onto the HPLC column. The brain was homogenised in 2 ml saline, protein precipitated by addition of 2 ml of ice-cold acetonitrile. Samples were then centrifuged (1300 rpm; 200 × g) and the supernatant added to 1 ml acetonitrile before being centrifuged again (1300 rpm; 200 × g). The acetonitrile was evaporated under a stream of argon gas and the remaining 1 ml injected onto the HPLC column. Analysis of the brain and plasma metabolite samples was performed using the analytical HPLC methodology described. The % parent compound present was calculated.
Statistical analysis
All data are presented as mean ± SEM. Data obtained at 30 and 120 min in conscious and anaesthetised groups were compared using two-way ANOVA with time and anaesthesia as variables and with a single rat as the unit of analysis.
In vitro binding to brain homogenates
Binding of [ 125 I]WYE-230949 to rat brain homogenates was saturable (Fig. 2) and displaced by 10 µM iodophenpropit. Data were best fitted with a one-site binding model, yielding calculated values for K d of 6.9 ± 1.3 nM and B max of 508.6 ± 34.9 fmol/mg protein (n = 8). Non-specific binding was linear and represented 39 ± 3% of total binding at 6.6 ± 0.1 nM (n = 3).
Ex vivo biodistribution and autoradiography
Ex vivo experiments were performed in both anaesthetised and conscious rats to examine if the low brain uptake of [ 125 I]WYE-230949 detected by SPECT imaging could have been due to anaesthesia. Brain uptake was low in both conscious and anaesthetised rats, the maximal amount measured being 0.11%ID/g of tissue at 30 min in anaesthetised rats. The brain uptake in anaesthetised rats was significantly higher than in conscious rats at both time points, 0.11 ± 0.01%ID/g vs. 0.06 ± 0.01%ID/g at 30 min and 0.06 ± 0.01 vs. 0.03 ± 0.003%ID/g at 120 min (Fig. 4). In other organs minimal differences between conscious and anaesthetised rats were observed. Overall uptake of [ 125 I]WYE-230949 was less at 120 min than 30 min in all organs. The peak radioactivity measured in plasma was 4.04 ± 1.14%ID/g of tissue at 0.38 ± 0.02 min (n = 6) and in whole blood was 3.14 ± 0.26%ID/g of tissue at 0.50 ± 0.03 min (n = 6). The radioactive concentration in plasma and whole blood reached a plateau of approximately 0.3%ID/g of tissue after 2 min (Fig. 5).
In areas with high histamine H 3 receptor densities, autoradiography demonstrated that there was higher specific binding at 120 min than at 30 min: in the caudate (p = 0.03), substantia nigra (p = 0.002), core (p = 0.0002) and shell of the nucleus accumbens (p = 0.006), posterior cortex (p = 0.02) and amygdala (Table 1). However, the ratio of specific to non-specific binding was low (less than 1.4) in all brain ROIs examined, both in conscious and anaesthetised rats (Table 1). In addition, in all cases of ex vivo autoradiography after administration of [ 125 I]WYE-230949 the pattern of [ 125 I]WYE-230949 distribution was more homogeneous compared with the heterogeneous pattern of radioligand binding obtained from in vitro autoradiography (Fig. 6).
Ex vivo analysis of rat plasma and brain samples
HPLC analysis of rat plasma and brain was performed in order to determine whether low brain uptake was due to the metabolism of [ 125 I]WYE-230949 in vivo. At both time points, and in conscious and anaesthetised rats, two metabolites were detected in plasma: one polar metabolite (Peak A, RT = 2.58 min) and one metabolite of intermediate lipophilicity (Peak B, RT = 3.84 min). Both of the metabolites were less lipophilic than [ 125 I]WYE-230949 (Peak C, RT = 7.89 min) (Fig. 7). The amount of parent compound remaining in the plasma was greater at 30 min than at 120 min (p = 0.009; Table 2) and the ratio of polar to intermediate metabolites increased with time both in conscious and anaesthetised rats.
In both conscious and anaesthetised rats two metabolites were also detected in brain tissue: one polar and one intermediate metabolite, both of which were less lipophilic than [ 125 I]WYE-230949. There was more parent compound present in the brains of conscious compared to anaesthetised rats at both time points (p = 0.035). The amount of parent compound remaining in the brain was similar at 30 min and 120 min after administration of [ 125 I]WYE-230949 in anaesthetised and conscious rats (Table 2). The ratio of polar to intermediate metabolites increased with time, and this was greater in conscious compared to anaesthetised rats.
Discussion
The aim of these studies was to evaluate [ 125 I]WYE-230949 and assess its potential as a novel radionuclide imaging agent for histamine H 3 receptors, having the advantage of a longer radioactive half-life that could be exploited more widely than the currently used carbon-11-labelled ligands. WYE-230949 was successfully radiolabelled by electrophilic iododestannylation with good properties for imaging such as high specific activity (~80 GBq/mmol), radiochemical yield (average 50%) and purity (>99%). Initial in vitro characterisation suggested [ 125 I]WYE-230949 had promise as a potential radionuclide imaging agent for the histamine H 3 receptor. Using standard methodology, the experimentally determined logP and logD values for [ 125 I]WYE-230949 were 1.59 and 1.64 respectively, which are comparable with other radiotracers that readily enter the brain such as [ 11 C]raclopride (logP 1.2) and [ 11 C]GSK189254 (logD of 1.7) [32,36]. Optimal logD and logP values for blood-brain barrier (BBB) penetration of radiotracers should lie between 1.0 and 3.5, with lower values favouring specific binding and higher values improving BBB diffusion [37,38]. With a low molecular weight of 468.52 g/mol, no hydrogen bond donors and 5 nitrogen and oxygen atoms, [ 125 I]WYE-230949 has characteristics that are predictive of good BBB penetration [39].
[ 125 I]WYE-230949 has high affinity (K d of 6.9 nM) for the rat histamine H 3 receptor in whole rat brain homogenates. A B max value of 509 fmol/mg protein in whole rat brain is higher than previously reported values for rat histamine H 3 receptor densities. For example, the B max determined using [ 123 I]iodoproxyfan in rat striatum was 78 fmol/mg of protein [40], the B max of [ 125 I]iodophenpropit in rat cortex was 268 fmol/mg of protein [41] and that of [ 3 H]GSK189254 in rat cortex was 283 fmol/mg of protein [36]. It would be reasonable to expect that the B max in whole brain would be lower than the B max in striatum due to the presence of low-density regions in the whole brain homogenate. It is not clear why our values are higher. Our data suggest that a secondary binding site is unlikely, as the data were best fit with a one-site binding model and the Scatchard plot was linear except for an expected increase in variability at very low bound values (Fig. 2).
In vitro autoradiography showed that the binding of [ 125 I]WYE-230949 was heterogeneously distributed in rat brain and corresponded with the known histamine H 3 receptor distribution, specifically in cortex, striatum, nucleus accumbens and substantia nigra [42,43]. Non-specific binding, as defined by the presence of a high concentration of iodophenpropit, was homogeneous, indicating selectivity and specificity of [ 125 I]WYE-230949 binding to histamine H 3 receptors in rat brain in vitro.
However, subsequent in vivo imaging studies were not supportive of [ 125 I]WYE-230949 as a potential radionuclide imaging agent for the histamine H 3 receptor. Following intravenous administration in rats, brain uptake of [ 125 I]WYE-230949 was low, maximally measured as 0.14%ID/g during the first 12 min post injection by SPECT, and distinct areas corresponding to the known distribution of H 3 receptors could not be reliably identified on the SPECT images. Brain uptake was considerably less than in other organs and compares poorly with other radiotracers for the histamine H 3 receptor. Uptake of [ 125 I]WYE-230949, like that of many PET and SPECT histamine H 3 radiotracers, was much higher in the lung, kidney and liver, the latter probably reflecting hepatic metabolism and renal excretion [17,20,21]. [ 125 I]WYE-230949 clearance from plasma and whole blood was rapid, reaching a concentration of 0.2-0.3%ID/g from 2 min onwards, suggesting low plasma binding. By comparison, brain uptake of [ 11 C]JNJ-10181457 was 1.38%ID/g in rats at 5 min post injection [20], while that of [ 11 C]GSK189254A was 9.0%ID/L at 20 min post injection in the porcine brain [36]. Other radioiodinated histamine H 3 receptor tracers have also shown higher brain uptake, such as [ 123 I]GR 190028, [ 123 I]FUB271 and [ 123 I]iodoproxyfan, with peak brain uptake values of about 0.6%ID/g, 1.2%ID/g and 1.5%ID/g, respectively [16].
The low brain uptake of [125I]WYE-230949 could be due to a number of factors. One factor could be the rapid metabolism of [125I]WYE-230949 in the body. Indeed, there was rapid [125I]WYE-230949 metabolism in the brain and plasma, with less than 38% of parent compound remaining in plasma after 30 minutes. This compares with 58% parent [11C]GSK189254 and 43% parent [18F]fluoroproxyfan remaining in plasma at 30 minutes post injection [21,36]. Metabolism of [125I]WYE-230949 proceeded in the plasma, with less than 8% remaining after 120 min, compared with 55% of [11C]GSK189254 remaining after 90 min. A highly hydrophilic species with a short retention time was detected by HPLC in tissue extracts, suggesting deiodination of [125I]WYE-230949 in vivo (Fig. 7). It is unlikely that these highly hydrophilic species would cross the BBB. Therefore the presence of hydrophilic species in the brain could be explained by metabolism occurring in the brain itself, although contamination of brain samples by metabolites in the cerebral vessels cannot be excluded. By comparison, greater than 80% of parent [11C]JNJ-10181457 remained in the brain at 30 min post injection, and 94% and 68% of parent [18F]fluoroproxyfan remained at 30 and 120 min respectively [20,21].
In addition to the rapid metabolism of [125I]WYE-230949, ex vivo autoradiography revealed that the regions with the highest uptake of [125I]WYE-230949 were the choroid plexus and, when present in sections, the pineal gland. A similar observation was noted after systemic administration of [125I]iodophenpropit [17]. In the pineal gland this may be due to [125I]WYE-230949 passing through the incomplete BBB, and in the choroid plexus this could reflect deiodination of [125I]WYE-230949 resulting in uptake of free radioactive iodide via the sodium-iodide symporter [44]. The thyroid also had high uptake on static whole brain SPECT images, again suggesting [125I]WYE-230949 was deiodinating in vivo. The absolute amount of deiodination in the choroid plexus could not be determined by autoradiography, as 125I standards were not available, and uptake of free radioactive iodine in the thyroid was assessed qualitatively in a static SPECT whole brain scan (S1 Fig.).
Other factors that could affect brain uptake include active transport mechanisms such as P-glycoprotein (P-gp) mediated transport and plasma protein binding. BBB penetration is a complex process, making prediction of in vivo brain uptake from in vitro assays challenging. Recent studies have highlighted that measurement of lipophilicity should not be relied upon as a predictor of BBB penetration [38,45].
It should be noted that higher specific binding was present at 120 min than at 30 min after [125I]WYE-230949 administration in the caudate, amygdala, core and shell of the nucleus accumbens, substantia nigra and posterior cortex, all regions previously shown to have high histamine H3 receptor densities. The increase in specific binding over time indicates that some of the injected [125I]WYE-230949 was binding to histamine H3 receptors in these regions [42,43], but not in sufficient quantities to permit SPECT imaging. Radioligand brain uptake, distribution and metabolite studies were performed in conscious as well as anaesthetised rats to rule out the possibility that anaesthesia may have confounded the images obtained from microSPECT imaging of [125I]WYE-230949. Brain uptake was lower in conscious rats (almost half that of anaesthetised rats; Fig. 4) and therefore anaesthesia could not explain the low brain uptake of [125I]WYE-230949. However, there was greater specific binding in the caudate, core of the nucleus accumbens, anterior, medial and posterior cortices and the choroid plexus in conscious rats, which may be explained by less metabolism of [125I]WYE-230949 compared to anaesthetised rats at 30 and 120 min post injection. Isoflurane has unpredictable effects on drug metabolism and radiotracer binding: for example, it accelerates the rate of cytochrome P-450 reduction by NADPH, inhibits aminopyrine N-demethylation, activates aniline hydroxylation, increases binding of [3H]-(S)-citalopram to serotonin transporters and decreases [125I]PE2I binding to dopamine transporters [46,47,48]. Additionally, mean and local cerebral blood flow is dose-dependently increased during isoflurane anaesthesia in most brain regions [49,50]. Taken together, these results reinforce previous work demonstrating that general anaesthesia during small animal imaging can have important effects on uptake, metabolism and binding of the radiotracer under investigation. Increasingly in radiotracer development, in vivo imaging is being performed in anaesthetised animals without any evaluation in conscious animals. Since anaesthesia has the potential to confound in vivo imaging studies, our data support performing validation of potential radiotracers under both conditions if possible.
In conclusion, [125I]WYE-230949 is not a useful radiotracer for imaging rat histamine H3 receptors in vivo, due to low brain uptake, in vivo metabolism of the parent compound and low specific binding.
Angiopoietin-1 is associated with cerebral vasospasm and delayed cerebral ischemia in subarachnoid hemorrhage
Background Angiopoietin-1 (Ang-1) and -2 (Ang-2) are key players in the regulation of endothelial homeostasis and vascular proliferation. Angiopoietins may play an important role in the pathophysiology of cerebral vasospasm (CVS). Ang-1 and Ang-2 have not been investigated in this regard so far. Methods 20 patients with subarachnoid hemorrhage (SAH) and 20 healthy controls (HC) were included in this prospective study. Blood samples were collected from days 1 to 7 and every other day thereafter. Ang-1 and Ang-2 were measured in serum samples using commercially available enzyme-linked immunosorbent assays. Transcranial Doppler sonography was performed to monitor the occurrence of cerebral vasospasm. Results SAH patients showed a significant drop of Ang-1 levels on days 2 and 3 post SAH compared to baseline and HC. Patients who developed Doppler sonographic CVS showed significantly lower levels of Ang-1, with a sustained decrease, in contrast to patients without Doppler sonographic CVS, whose Ang-1 levels recovered in the later course of the disease. In patients developing cerebral ischemia attributable to vasospasm, significantly lower Ang-1 levels were already observed on the day of admission. Differences of Ang-2 between SAH patients and HC, or between patients with and without Doppler sonographic CVS, were not statistically significant. Conclusions Ang-1, but not Ang-2, is significantly altered in patients suffering from SAH, and especially in those experiencing CVS and cerebral ischemia. The loss of vascular integrity, regulated by Ang-1, might be in part responsible for the development of cerebral vasospasm and subsequent cerebral ischemia.
Background
Subarachnoid hemorrhage (SAH) accounts for 2-5% of all new strokes and is still associated with high morbidity and mortality [1,2]. In about 85% of all patients, non-traumatic SAH is caused by the rupture of an intracranial aneurysm [3]. Cerebral vasospasm (CVS) is one of the most important complications of SAH and may be associated with delayed cerebral ischemia (DCI), frequently resulting in poor functional outcome and death [4][5][6]. Various mechanisms are thought to be involved in the pathophysiology of CVS. Apart from smooth muscle contraction and an increase of spasmogens such as oxyhemoglobin or bilirubin oxidation products, an imbalance of endothelium-derived vasoconstrictor and vasodilator substances is thought to play a crucial role in CVS pathogenesis [7,8].
High serum levels of Ang-2 together with a decrease of the protective factor Ang-1 are associated with poor outcome and death in acute lung injury, severe sepsis, cerebral malaria and various other diseases [18][19][20][21][22][23][24]. In a recent publication by our group, we showed that endothelial microparticles are elevated in patients with CVS and DCI indicating an important role of the endothelium in CVS pathophysiology [25]. The current study investigates other factors involved in vascular homeostasis.
The primary hypothesis was that the angiopoietin system is altered in patients developing severe vasospasm and radiographic infarcts after SAH. Therefore, Ang-1 and Ang-2 serum concentrations were longitudinally measured in SAH patients monitored for the occurrence of CVS and DCI.
Study Population
Between November 2007 and January 2009 twenty consecutive patients with aneurysmal SAH admitted to the neurocritical care unit of the Department of Neurology of Innsbruck Medical University were enrolled in this prospective study. All patients were treated by endovascular coiling with electrolytically detachable platinum coils, six patients (30%) received additional vascular stents. The study protocol was approved by the Ethics Committee of Innsbruck Medical University (Reference Number UN3021, 256/4.17). Inclusion criteria: SAH confirmed by cerebral computed tomography (CT), ruptured intracranial aneurysm demonstrated by digital subtraction angiography (DSA) for which interventional coiling was possible, first signs and symptoms having occurred within 48 hours before screening, written informed consent before recruitment or at time of regaining consciousness and WFNS grades I-V. Exclusion criteria: intracerebral or intraventricular blood without aneurysmal bleeding source, moderate to severe vasospasm at screening angiography, known coagulopathies, treatment with thrombocyte aggregation inhibitors or vitamin-K antagonists and severe pre-existing concomitant diseases.
Twenty age and gender matched healthy volunteers were recruited from hospital workers and relatives of the study investigators (mean age: 52.2, range: 33-68). All data was analyzed on an intention-to-treat basis.
Sample collection and measurement
Blood samples of SAH patients were prospectively collected daily for the first 7 days, then every other day until 15 days post SAH. The first sample was taken before DSA was performed. Single blood samples from 20 age- and gender-matched volunteer donors served as healthy controls. Blood was collected using Sarstedt Monovette serum tubes. After at least 30 minutes of clotting time, serum was obtained by centrifugation at 1500 rcf for 15 min within two hours after blood collection and stored at -80°C until use. Ang-1 and Ang-2 were measured in serum samples using enzyme-linked immunosorbent assays (R&D Systems, Minneapolis, MN) according to the manufacturer's instructions.
Transcranial Doppler sonography (TCD) and patient management
TCD was performed daily from day 1 to 7 and every other day thereafter. Recordings of the mean blood flow velocities (mBFV) were performed through the transtemporal ultrasound window using a 2-MHz handheld transducer probe (Compumedics DWL Multidop X4, Melbourne, Australia) when pCO2 levels were within normal ranges. Doppler sonographic cerebral vasospasm (dCVS) was defined as an mBFV of 120 cm/s or more in the middle cerebral artery [26]. DCI was defined as a new infarct on CT scan that had not been detected on the admission or the immediate post-interventional scan, and that was classified as vasospasm-related by the research team. Other potential causes of CT pathologies (e.g. rebleeding, cerebral edema or ventriculitis) were excluded. CT scans were also performed at discharge and were assessed by an independent radiologist.
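As a toy illustration of the dCVS criterion defined above (mean MCA flow velocity of 120 cm/s or more), the following Python sketch flags the first day on which a patient's velocity series crosses the threshold; the example series is invented.

```python
# Toy illustration of the dCVS criterion (mean MCA mBFV >= 120 cm/s).
DCVS_THRESHOLD_CM_S = 120.0

def first_dcvs_day(mbfv_by_day):
    """Return the first post-SAH day on which the dCVS criterion is met,
    or None if it is never met."""
    for day, velocity in sorted(mbfv_by_day.items()):
        if velocity >= DCVS_THRESHOLD_CM_S:
            return day
    return None

example_patient = {1: 85.0, 2: 98.0, 3: 112.0, 4: 131.0, 5: 140.0}  # cm/s (invented)
print(first_dcvs_day(example_patient))  # -> 4
```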
At the end of hospitalization and after 6 months outcome was evaluated by modified Rankin Scale (mRS) and the Glasgow Outcome Scale (GOS). Demographic, clinical and laboratory values were recorded prospectively throughout the study. Patients experiencing dCVS received hemodynamic augmentation involving a target central venous pressure of > 8 mm Hg according to local protocols, which have been published previously [2]. Hypertension was induced using norepinephrine or phenylephrine infusion and fluid to maintain a mean arterial blood pressure of ≥100 mmHg. All patients received nimodipine either per os or intravenously at a daily dose of 300 mg, unless hemodynamic instability or hypotension occurred.
Statistical methods
Angiopoietin levels were compared between the patient groups by Wilcoxon rank-sum test or Wilcoxon signed-rank test, as appropriate. The false discovery rate (FDR) criterion was used for controlling the errors in multiple comparisons [27]. To test the association between cerebral vasospasm and levels of Ang-1 and Ang-2 while accounting for important covariates (age, sex, white blood cell count (WBC), C-reactive protein (CRP) and body temperature), generalized estimation equations (GEE) were calculated with day post SAH and presence of dCVS as factors. To avoid co-linearity, five different models were calculated, one for each of the respective covariates. Ang-1 and Ang-2 values were transformed logarithmically for this approach. Data are presented as mean ± SEM unless otherwise stated. Calculations were done using PASW 18 (SPSS Inc., Chicago, IL, USA). Graphs were drawn with GraphPad Prism 5.00 software (GraphPad Prism Software Inc., San Diego, CA, USA).
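A minimal Python sketch of this analysis strategy is shown below, assuming a long-format table with one row per patient per day; the column names (patient, day, dcvs, log_ang1, age) are hypothetical and the code is illustrative only, not the analysis actually run in PASW.

```python
# Sketch of per-day group comparisons with FDR correction and a GEE model
# with day, dCVS and their interaction, adjusting for one covariate at a time.
import pandas as pd
from scipy.stats import mannwhitneyu
from statsmodels.stats.multitest import multipletests
import statsmodels.api as sm

def compare_groups_by_day(df):
    """Wilcoxon rank-sum (Mann-Whitney U) per day, FDR-corrected."""
    days, pvals = [], []
    for day, sub in df.groupby("day"):
        a = sub.loc[sub.dcvs == 1, "log_ang1"]
        b = sub.loc[sub.dcvs == 0, "log_ang1"]
        if len(a) > 1 and len(b) > 1:
            days.append(day)
            pvals.append(mannwhitneyu(a, b).pvalue)
    _, p_adj, _, _ = multipletests(pvals, method="fdr_bh")
    return pd.DataFrame({"day": days, "p_raw": pvals, "p_fdr": p_adj})

def gee_model(df, covariate="age"):
    """GEE on log-transformed Ang-1 with an exchangeable working correlation."""
    model = sm.GEE.from_formula(
        f"log_ang1 ~ C(day) * C(dcvs) + {covariate}",
        groups="patient",
        data=df,
        cov_struct=sm.cov_struct.Exchangeable(),
        family=sm.families.Gaussian(),
    )
    return model.fit()
```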
Patients' characteristics
Patients' age ranged from 31 to 66 years (mean 52.2 years); 4 patients were male and 16 were female. One patient showed mild vasospasm during intervention on day one. Another ten patients developed dCVS between days 2 and 13 (1 patient on day 2, 4 patients on day 4, 3 patients on day 6, 1 patient on day 11 and 1 patient on day 13). Seven patients developed cerebral ischemia attributable to vasospasm (CIV). Demographic, clinical and laboratory characteristics of all patients are listed in Table 1 and were compared based on the presence of dCVS. Baseline characteristics were comparable between both groups.
Time course of Ang-1 and Ang-2 serum levels
Ang-1 levels decreased significantly on day 2 and 3 compared to baseline (p < 0.05, figure 1a). Ang-1 levels on days 2 and 3 also differed significantly between SAH patients and healthy controls (p < 0.05). Compared to day 2, when Ang-1 levels reached lowest values, there was a significant increase starting on day 5 reaching levels comparable to healthy controls (p < 0.05 for days 5 and 7, p < 0.01 for days 9, 11, 13 and 15).
Ang-2 levels did not differ significantly between days (figure 1b) or between patients and healthy controls. There was a trend towards higher Ang-2 serum concentrations in SAH patients on day 1 compared to healthy controls (p = 0.063).
Ang-2 levels were significantly higher in patients with Fisher grade 4 compared to patients with Fisher grade 2 and 3 (p < 0.01, figure 2). Neither Ang-1 nor Ang-2 levels differed significantly between patients receiving additional vascular stents and patients without stents (data not shown).
Doppler sonographic cerebral vasospasm
To analyze the time course of angiopoietin levels and its association to the development of dCVS, multivariate generalized estimation equations were applied with day post SAH and presence of dCVS as factors including important covariates (age, sex, WBC, CRP and body temperature). These models showed a statistically highly significant effect of the interaction of both factors indicating different dynamics for Ang-1 in patients with or without dCVS, respectively (p < 0.001, figure 3). In contrast to patients without dCVS, in whom Ang-1 increased earlier starting from day 3, patients suffering from dCVS showed a delayed increase of Ang-1 serum concentrations.
For Ang-2 serum levels no significant association to dCVS was found (data not shown).
Cerebral ischemia attributable to vasospasm
Ang-1 serum levels on day 1 were significantly lower in patients who developed CIV (n = 7) than in patients without CIV in the later course of SAH (n = 13) (Wilcoxon rank-sum test, p < 0.05). The GEE models verified a time-dependent difference of Ang-1 in patients with and without CIV, showing a highly significant effect of the interaction of the factors CIV and day post SAH, independent of the above mentioned covariates (p < 0.001; figure 4). For Ang-2 serum levels no significant association with CIV could be found (data not shown).
Discussion
This pilot study describes the time course of Ang-1 and Ang-2 serum levels in patients with aneurysmal SAH and their association with the development of dCVS and CIV. Our main findings were: 1) Ang-1 serum concentrations were significantly lower in SAH patients on days 2 and 3 compared to baseline levels, 2) serum levels of Ang-1 differed significantly between SAH patients and healthy controls, 3) patients with dCVS, and in particular patients with CIV secondary to dCVS, revealed a different time course of Ang-1 serum concentrations with delayed recovery of low Ang-1 levels observed in the early course of the disease.
The vascular endothelium modulates vascular tone through the release of various vasoactive substances regulating smooth muscle cell contraction [28]. A sensitive equilibrium between vasoconstricting and vasorelaxing substances is crucial for the maintenance of normal blood vessel diameter [28]. CVS is characterized by a prolonged and enhanced contraction of smooth muscle cells in the arterial vessel wall [28]. Among other mechanisms, it is caused by calcium-dependent vasoconstriction, upregulation of vasoconstrictors such as endothelin-1 (ET-1) and decreased levels of vasorelaxing substances such as nitric oxide (NO) [28]. ET-1 is the most important endothelial factor mediating vasoconstriction and is up-regulated during CVS [29]. Interestingly, Ang-1 down-regulates the expression of ET-1 in vitro, reducing ET-1 mRNA and protein levels [30]. Results from animal experiments show reduced ET-1 after injection of Ang-1 transfected fibroblast cells into the rat lung [30]. Moreover, it was shown that Ang-1 up-regulates the endothelial nitric oxide synthase, an important source of the vasorelaxant NO. In our study, patients with dCVS, and in particular those with CIV, revealed a delayed increase of Ang-1 and showed lower Ang-1 values. It is tempting to speculate that a lack of Ang-1 contributes to an imbalance favoring vasoconstrictive substances such as ET-1.
Another important feature of CVS is endothelial cell apoptosis [31,32]. Cerebral endothelial cell death has been reported after SAH in rats [33]. Apoptosis of endothelial cells has been suggested to expose smooth muscle cells within the vessel walls to damaging and vasoconstrictive substances within the blood flow [32]. The regulation of endothelial cell viability is a crucial function of angiopoietins with Ang-1 ensuring endothelial survival and Ang-2 inducing endothelial cell death [9,10]. We found decreased levels of Ang-1, an antiapoptotic factor on endothelial cells, in patients with dCVS. This could further support the importance of endothelial apoptosis in the pathogenesis of CVS after SAH.
Other markers for vascular injury are endothelial microparticles, which have recently been found to be associated with dCVS and CIV by our study group [25]. Ang-1 has been shown to suppress the generation of endothelial microparticles in vitro [34]. Lower Ang-1 levels might explain the increased levels of endothelial microparticles observed in patients with dCVS and CIV.
Data from various experimental and clinical studies suggest that Ang-1 is protective in cerebral ischemia. In the acute phase after ischemic stroke, Ang-1 is regarded as a protective factor on the vascular endothelium with important functions regarding blood-brain barrier stability [35]. This is supported by the fact that reduced levels of Ang-1 after cerebral ischemia are associated with blood-brain barrier breakdown [36]. The application of COMP-Ang-1, a soluble Ang-1 variant, in rats induced a reduction of infarct volume and of neurological deficits [37]. Zhao and colleagues report a protective effect of Ang-1 in a rat model of cerebral ischemia [36]. We found decreased levels of Ang-1 in patients with CIV. This further supports a possible protective role of Ang-1 in ischemic brain damage. Importantly, Ang-1 levels in patients with dCVS differed starting on day 3, whereas patients with CIV revealed different Ang-1 levels from the very first day. Pathologic alterations, such as acute CVS, cytotoxic edema and metabolic changes, have been described immediately after experimental and clinical SAH [38]. The current findings further corroborate the idea that mechanisms triggered by the initial bleeding determine the predisposition for delayed cerebral infarction. Differences in baseline Ang-1 might reflect early impairment of vascular function in those patients who develop symptomatic vasospasm later on. Surprisingly, we found significant alterations associated with dCVS for Ang-1 but not for Ang-2. Ang-1 is a product of pericytes, smooth muscle cells and fibroblasts, in contrast to Ang-2, which is mainly expressed by endothelial cells [9,10]. This might suggest a predominant role of perivascular cell types in the pathogenesis of CVS. However, further studies are required to evaluate the relative contributions of endothelial and smooth muscle cell derived mechanisms in the pathophysiology of cerebral vasospasm during SAH.
It should be noted that in the current study the diagnosis of CVS was based on TCD evaluations and not on digital subtraction angiography. Although the observed incidence of dCVS was within the known ranges [6], we might have missed some patients with CVS, since the sensitivity of TCD in detecting angiographic CVS is not 100% [39,40]. However, analyzing patients with cerebral ischemia also revealed a significant change in the time course of Ang-1 serum concentration, indicating that Ang-1 alterations occur in both dCVS and CIV. Though not necessarily associated with clinical symptoms or neurologic deficits, transient changes in the cerebral vasculature, i.e. dCVS, seem to alter the release of Ang-1 from perivascular cells.
Our study was designed as a pilot study and therefore only included a small number of patients, which might be regarded as a limiting factor. Importantly, patients were well matched and showed a representative distribution of demographic and clinical characteristics. In addition, the incidence of CVS, DCI and mortality was similar to previously published local and international data [6,41,42].
Conclusions
In summary, this is the first report of the temporal dynamics of Ang-1 and Ang-2 during the course of spontaneous subarachnoid hemorrhage. Ang-1 levels showed an initial decrease after ictus and a delayed return to baseline values in patients who developed dCVS in the course of the disease. In patients suffering from CIV, lower values of Ang-1 have already been observed on the day of admission. Ang-1 is likely to play an important role in SAH pathophysiology and in the development of CVS. Its exact function in this regard as well as potential therapeutic implications warrant further investigation.
A New Displacement-based Approach to Calculate Stress Intensity Factors With the Boundary Element Method
The analysis of cracked brittle mechanical components considering linear elastic fracture mechanics is usually reduced to the evaluation of stress intensity factors (SIFs). The SIF calculation can be carried out experimentally, theoretically or numerically. Each methodology has its own advantages but the use of numerical methods has become very popular. Several schemes for numerical SIF calculations have been developed, the J-integral method being one of the most widely used because of its energy-like formulation. Additionally, some variations of the J-integral method, such as displacement-based methods, are also becoming popular due to their simplicity. In this work, a simple displacement-based scheme is proposed to calculate SIFs, and its performance is compared with contour integrals. These schemes are all implemented with the Boundary Element Method (BEM) in order to exploit its advantages in crack growth modelling. Some simple examples are solved with the BEM and the calculated SIF values are compared against available solutions, showing good agreement between the different schemes.
The application of the BEM to fracture mechanics problems was initiated by Cruse through two works presented in 1970 and 1971 (Cruse, 1996). These early works reported inaccurate SIF results (Aliabadi, 2002). Later, Cruse and Wilson (1977) implemented quarter-point elements to improve the accuracy of the BEM calculations, but the method had other difficulties for its application to crack problems. In general, these initial applications of the BEM to crack problems were limited by the fact that the two surfaces that form the crack were coplanar, generating a mathematical degeneration (Cruse, 1996).
In the early nineties, Portela et al. (1992) for two dimensions, and Mi and Aliabadi (1992) for 3D solids, proposed the Dual Boundary Element Method (DBEM), in which a displacement boundary integral equation (BIE) is applied on one surface of the crack and a traction BIE is applied on the other crack surface, thus avoiding the degeneracy in the Kelvin formulation found by Cruse (1996). From there, many works have been developed in the area, such as dell'Erba and Aliabadi (2001), who developed a DBEM methodology to solve 3D thermo-elasticity problems using the J-integral for evaluating the SIF. Dirgantara and Aliabadi (2002) used the DBEM to obtain mixed mode SIF values for cracked thin plates using crack surface displacement extrapolation and the J-integral technique. Purbolaksono et al. (2012) calculated the SIF in deformable plates using the DBEM and displacement extrapolation techniques. Wen and Aliabadi (2012) developed an algorithm to model smooth curved cracks using the DBEM.
Several alternative methods have been proposed for calculating SIF values at the crack tip using the FEM or the BEM as primary methods to solve the linear elasticity problem, such as:
• Displacement Extrapolation (DE), which consists in extrapolating the numerical displacement field with the analytical solution to obtain the SIF. Cruse and Wilson (1978) used the DE with quarter-point elements, obtaining reasonable results.
• Strain Energy Release, which calculates the strain energy of the deformed body or the external work done by loads for small crack advances in order to differentiate it and extract the SIF. This method was used by Cruse (1988) but was computationally expensive due to the small crack advance needed to achieve a reasonable accuracy.
• J-integral, a path-independent integral proposed by Rice (1968), which is a contour integral that measures the strain energy flux across its boundary. This technique has been used to compute the SIF in many FEM and BEM works, including Rigby and Aliabadi (1998), who proposed a decomposition technique to extract the mixed mode SIF, Bezerra and Medeiros (2002), who proposed an alternative numerical scheme to implement the J-integral calculation, and Ortiz and Cisilino (2006), who developed a J-integral based methodology for 3D cracks.
• M-integral, also called interaction integral, a variant of the J-integral used by Walters et al. (2005) with the Galerkin BEM to obtain the SIF for 3D curved loaded cracks.
• Energy Domain Integral, an approach used by Balderrama et al. (2006) that measures the total change of potential energy (including thermal strains) when the crack advances.
• Crack Closure Integral, originally developed for the FEM, which is a stress-based approach to extract the SIF by measuring the force needed to close the crack. Singh et al. (1998) developed a formulation to be applied with the BEM, obtaining good results.
• The Least Squares method was used by Ju (1998) to extract the KIII SIF through a least squares fit of the stress solution obtained from the FEM.
• The Generalized Displacement Correlation method, recently developed by Fu et al. (2012) for the FEM, uses the displacement solution at crack surfaces for an explicit calculation of the mixed mode SIF.
In this work, a new displacement-based technique is proposed to calculate SIFs for different geometric configurations using the DBEM.Different schemes are proposed to be used with this new technique, and numerical results are compared with those obtained using the J-integral technique in order to compare their accuracy and computing performance.
THE BOUNDARY ELEMENT METHOD
The formulation of the BEM is based on Betti's reciprocal theorem, where the following integral equation relating displacements u with tractions t over the boundary S is used (Becker, 1992):

$$C_{ij}(p)\,u_j(p) + \int_S T_{ij}(p,Q)\,u_j(Q)\,dS = \int_S U_{ij}(p,Q)\,t_j(Q)\,dS \qquad (1)$$

where C_ij = δ_ij/2 for smooth surfaces, with δ_ij the Kronecker delta, T_ij and U_ij are the traction and displacement kernels for the displacement integral equation, p is the collocation point and Q is a generic boundary point. The boundary geometry is discretized using quadratic elements (adopted in this work) and then Eq. (1) is written for each node, generating a square system of equations after the known boundary conditions are applied.
However, for a cracked body, the crack geometry schematized in Figure 1, defined by the two faces S_c+ and S_c-, has the same nodal coordinates if the crack faces are coplanar. This generates an ill-posed problem, since Eq. (1) written for the S_c+ nodes is linearly dependent on the S_c- equations. To overcome this issue, Portela et al. (1992) developed the DBEM for two-dimensional problems. The DBEM consists of applying Eq. (1) to the non-crack boundary S and to one crack face S_c-, while the traction integral equation below is applied on the other crack face S_c+:

$$\tfrac{1}{2}\,t_j(p) + n_i(p)\int_S S_{kij}(p,Q)\,u_k(Q)\,dS = n_i(p)\int_S D_{kij}(p,Q)\,t_k(Q)\,dS \qquad (2)$$

This results in a well-posed system of equations that can be solved to obtain the displacement and traction fields over the boundary. In Eq. (2), S_kij and D_kij are the traction and displacement kernels for the traction integral equation and n_i(p) is the outward normal at the collocation point.
The kernels for Eq. (1) and Eq. (2) can be found in Aliabadi (2002). The different singular behaviors of the integrands require special treatment to obtain meaningful and accurate results.
In this work, the DBEM implementation is done using isoparametric quadratic elements; regular integration is performed using Gauss quadrature, and the Cauchy principal value and the Hadamard finite-part regularization are used to evaluate the kernels with singular integrals.
STRESS INTENSITY FACTORS
The validity of the linear elastic fracture mechanics (LEFM) assumption resides in the small-scale yielding hypothesis, meaning that plastic strains only develop, at the crack tip, in a region that is small compared to the whole geometry; thus they can be neglected, since their contribution to the global response is negligible. The near-tip stress and displacement fields are of the form

$$\sigma_{ij} = \frac{K_I}{\sqrt{2\pi r}}\, f^{I}_{ij}(\theta) + \frac{K_{II}}{\sqrt{2\pi r}}\, f^{II}_{ij}(\theta) \qquad (7)$$

$$u_i = \frac{K_I}{2G}\sqrt{\frac{r}{2\pi}}\, g^{I}_{i}(\theta,k) + \frac{K_{II}}{2G}\sqrt{\frac{r}{2\pi}}\, g^{II}_{i}(\theta,k) \qquad (8)$$

where f_ij and g_i are known angular functions, G is the shear modulus, k = 3 − 4ν for plane strain and k = (3 − ν)/(1 + ν) for plane stress, in accordance with the crack geometry coordinate system shown in Figure 2 (Gross and Seelig, 2011). Now, the fracture mechanics problem with the LEFM approach is reduced to the determination of stress intensity factors. The easiest way to calculate the SIF is by obtaining stress values directly at the crack tip, but this is unsuitable, since numerical results near the crack tip are generally imprecise.
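For reference, a small Python sketch of the first-term (Williams) near-tip displacement field corresponding to Eq. (8) is given below; the angular functions are the standard textbook expressions, and the code is illustrative only, not part of the DBEM implementation used in this work.

```python
# Standard first-term near-tip displacement field (see, e.g., Gross and Seelig, 2011).
import numpy as np

def kolosov_kappa(nu, plane_strain=True):
    """Kolosov constant k: 3 - 4*nu (plane strain) or (3 - nu)/(1 + nu) (plane stress)."""
    return 3.0 - 4.0 * nu if plane_strain else (3.0 - nu) / (1.0 + nu)

def crack_tip_displacements(k1, k2, r, theta, shear_modulus, kappa):
    """Leading-order displacements u_x, u_y at polar position (r, theta)
    measured from the crack tip, with theta = 0 ahead of the crack."""
    c = np.sqrt(r / (2.0 * np.pi)) / (2.0 * shear_modulus)
    s, co = np.sin(theta / 2.0), np.cos(theta / 2.0)
    ux = c * (k1 * co * (kappa - 1.0 + 2.0 * s**2)
              + k2 * s * (kappa + 1.0 + 2.0 * co**2))
    uy = c * (k1 * s * (kappa + 1.0 - 2.0 * co**2)
              - k2 * co * (kappa - 1.0 - 2.0 * s**2))
    return ux, uy
```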
COMPUTATION TECHNIQUES FOR SIFS
In order to obtain stress intensity factors for cracked bodies, the classic approach begins by calculating the stress field near the crack tip and extrapolating the results to the crack tip with Eq. (7). This approach leads to numerical inaccuracies due to the singular behavior of the stress field near the crack tip, which is usually underestimated in the elastic solution of the problem by numerical methods, including the BEM (Cruse, 1996).
The J-Integral
The J-integral is a contour integral that measures the strain energy flux across its boundary. Setting the integration contour far from the crack tip, the strain and stress fields can be accurately computed to evaluate the contour integral and obtain the SIF values. The J-integral for plane problems is calculated as

$$J = \int_{\Gamma} \left( W\,dy - t_i \frac{\partial u_i}{\partial x}\,ds \right) \qquad (9)$$

where W is the strain energy density and t_i are the tractions on the contour Γ. The relationship between J and the SIFs for the LEFM approach is given by

$$J = \frac{K_I^2 + K_{II}^2}{E'} \qquad (10)$$

with E' = E for plane stress and E' = E/(1 − ν²) for plane strain. A circumferential integration contour centered at the crack tip is defined to implement the J-integral technique, as shown in Figure 3. The contour is discretized into quadratic elements (Bezerra and Medeiros, 2002) parameterized with an intrinsic variable (z). After some manipulations of Eq. (9), the integrand is rewritten in quadratic form as a function of the displacement gradient, Eq. (11). The first array in Eq. (11) corresponds to the constitutive matrix that relates stresses to deformations, where λ and μ are Lamé's parameters, while the second array contains the arrangement for the contour outward normal. The contour integration is parameterized by means of the Jacobian Jc, and the J-integral can be evaluated by the summation of the values at the Gauss points (ng) times their weights w for each element of the contour (ne).
The gradient of the displacement field u, ∇u, defined below, is written in vector form and can be decomposed into its symmetric and anti-symmetric parts using the method proposed by Rigby and Aliabadi (1998).
Displacement extrapolation
From the displacement field solution, the easiest way to obtain the SIF is by using the displacement extrapolation method (DE). This can be accomplished by taking the crack opening displacement value nearest to the crack tip, obtained immediately after solving the problem, and using Eq. (13) to retrieve the SIF directly (Aliabadi, 2002). This technique is very efficient because the numerical solution at crack nodes is immediately available from the BEM and no internal nodes need to be evaluated. The displacement field near the crack tip is well behaved and can be retrieved with good accuracy using the BEM even at the crack tip, but the DE results are very sensitive to numerical errors in the displacements.
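A minimal sketch of this idea is given below; the crack-opening relation used is the standard leading-term result consistent with the near-tip field above and is stated here as an assumption, since the paper's Eq. (13) is not reproduced.

```python
# Displacement-extrapolation style evaluation: recover K_I and K_II from the
# relative crack-face displacements at a node a distance r behind the tip.
# The relation is the standard leading-term result, not copied from Eq. (13).
import numpy as np

def sif_from_cod(delta_ux, delta_uy, r, shear_modulus, kappa):
    """delta_ux / delta_uy: relative sliding / opening between the two crack
    faces at distance r from the tip (crack-tip coordinates)."""
    factor = shear_modulus / (kappa + 1.0) * np.sqrt(2.0 * np.pi / r)
    k1 = factor * delta_uy   # opening mode
    k2 = factor * delta_ux   # sliding mode
    return k1, k2
```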
Displacement fitting
The displacement field given in Eq. (8) is valid at any point near the crack tip because it only includes the leading √r term of the power series. However, from the complete crack-tip solution, it is known that the next term in the power series is of order r, which must be included in the approximation in order to consider its contribution to the displacement field, accompanied by a φ function, as shown in Eq. (14). An internal or surface mesh with an arbitrary set of nodes, as shown in Figure 4, can be used to apply this methodology; the displacement field can be retrieved using Eq. (1) at each of the nodes from the BEM solution. The displacement field can be decomposed to decouple the SIFs in Eq. (14), in order to take advantage of its symmetry properties (Aliabadi, 2002). The decomposition can be carried out using Eq. (15). For the given set of twelve internal nodes in Figure 4, the displacement field (modes I and II) is known from the BEM numerical solution. The position of these nodes in the crack tip coordinate system is also known.
Writing Eq. (14) in matrix form and separating the unknowns from the geometric parameters, a linear system is obtained. The coefficients of the additional terms in Eq. (14) are considered as unknowns. The numerical displacement field for each node is equated to Eq. (14), following the idea of the DE method. This leads to a system of equations with four unknowns (K_I, K_II, c1, c2), Eq. (17). The system of equations (17) is solved through the least-squares method. Renaming the terms in Eq. (17), the fitting solution is retrieved using the pseudo-inverse approach, since the equation system is linear. Thus, K_I and K_II can be retrieved from a completely arbitrary node distribution by fitting the numerical solution to the analytic field. This procedure is well suited to the BEM, where the internal solution for the displacement field is easily calculated in a post-processing routine using Eq. (1), which is more efficient than evaluating the stress integral equation needed in the J-integral calculations.
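The following Python sketch illustrates the fitting procedure on an arbitrary set of nodes. The leading √r columns follow the standard near-tip field; since the exact form of the higher-order term in Eq. (14) is not reproduced here, its contribution is modelled, as an assumption, by two coefficients multiplying r in each displacement component.

```python
# Sketch of the displacement-fitting technique: least-squares fit of the BEM
# displacement solution to the near-tip field. Illustrative only.
import numpy as np

def fit_sifs(nodes, ux, uy, shear_modulus, kappa):
    """nodes: (n, 2) array of (r, theta) in crack-tip coordinates.
    ux, uy: BEM displacements at those nodes. Returns [K_I, K_II, c1, c2]."""
    r, theta = nodes[:, 0], nodes[:, 1]
    c = np.sqrt(r / (2.0 * np.pi)) / (2.0 * shear_modulus)
    s, co = np.sin(theta / 2.0), np.cos(theta / 2.0)

    # Columns multiplying K_I and K_II (leading sqrt(r) term, as in Eq. (8)).
    a_k1 = np.concatenate([c * co * (kappa - 1.0 + 2.0 * s**2),
                           c * s * (kappa + 1.0 - 2.0 * co**2)])
    a_k2 = np.concatenate([c * s * (kappa + 1.0 + 2.0 * co**2),
                           -c * co * (kappa - 1.0 - 2.0 * s**2)])
    # Assumed columns for the higher-order (order r) contribution.
    a_c1 = np.concatenate([r, np.zeros_like(r)])
    a_c2 = np.concatenate([np.zeros_like(r), r])

    A = np.column_stack([a_k1, a_k2, a_c1, a_c2])
    b = np.concatenate([ux, uy])
    solution, *_ = np.linalg.lstsq(A, b, rcond=None)
    return solution
```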
General
In order to compare the performance and accuracy of the different schemes for calculating the SIF, six examples are solved and compared with their respective reference solutions, which can be found in Tada et al. (1985) for the specimen cases, Shahani and Tabatabaei (2008) for the FPB specimen and API 579-1/ASME FFS-1 (2007) for the cylinder cases. The material properties are set to E = 200 GPa and ν = 0.3 to solve the linear elasticity problem.
The geometries are generated and meshed using an automatic crack growth algorithm developed in MATLAB. All geometries were modelled for crack sizes ranging from a/t = 0.1 to a/t = 0.6. K_I values are calculated at each crack growth step using the following schemes:
• J-integral: evaluated using four symmetric elements and a contour radius of one element length to guarantee a straight crack inside the integrating contour.
• Displacement Fitting Technique (DFT).
The following schemes were used to fit the solution, considering that this technique can be applied to any arbitrary set of nodes:
o Surface nodes (S nodes): in the crack tip element there are three boundary nodes, because the element is quadratic, as shown in Figure 5. Displacements are directly available from the BEM solution and the DFT can be used to calculate the SIF.
o Contour nodes (J nodes): the nodes resulting from the discretization of the circular contour used to evaluate the J-integral are taken as follows: one node belongs to the surface of the crevice and the remaining nodes are internal (see Figure 6). These nodes are evaluated using Eq. (1), obtaining another particular approach to apply the DFT.
o Internal nodes (M nodes): a symmetric mesh with twelve internal nodes is used for calculating the SIF (Figure 7). Several internal nodes at different angles and radii are used.
Six different geometries are evaluated to determine the performance of the numerical schemes used in this work. The values obtained are compared to the corresponding reference solutions. The comparison is carried out using the relative difference with respect to the reference solution:

$$\text{difference}\,(\%) = \left|\frac{K_{\mathrm{num}} - K_{\mathrm{ref}}}{K_{\mathrm{ref}}}\right| \times 100 \qquad (20)$$
Single Edge Notched Tension (SENT)
A SENT geometry is solved with dimensions t = 1 m and L = 3 m, as shown in Figure 8, with an initial crack length a = 0.1 m and a crack growth advance of Δa/t = 0.05. The boundary conditions and loads are also schematized together with the BEM contour mesh.
Three and Four Point Bend (TPB and FPB)
As a second case, a TPB specimen is modelled using the dimensions t = 1 m and L = 4 m (keeping the relation L/t = 4). On the other hand, the FPB specimen is solved using the dimensions t = 1 m, L = 6 m and d = 1.5 m. An initial crack length a = 0.1 m is also used and a crack growth advance of Δa/t = 0.05 is established. The boundary conditions and loading are also schematized in the BEM contour meshes shown in Figure 9.
Compact Specimen (CS)
The CS geometry is modelled with the required dimensions for the experimental testing given by ASTM E647. However, some simplifications are carried out to model the CS, e.g. the loading pin holes are neglected and the tensile load is directly applied, as shown in Figure 10. The geometry thickness is set to t = 1 m, with L = 1.2t and an initial crack length a = 0.2t.
Thick and Thin Walled Cylinders (CYL1 and CYL50)
Finally, a thick walled cylinder (R/t = 1) and a thin walled cylinder (R/t = 50) are modelled, both with an infinitely long radial crack subjected to internal pressure (Figure 11). The boundary conditions are defined at symmetry planes and a pressure-loaded crack is located at an angle of 45° measured from the ends. The thickness is set to t = 1 m and the initial crack length is a = 0.1 m.
Single Edge Notched Tension (SENT)
Results obtained for the SENT geometry are shown in Figure 12. The K_I values calculated with the J-integral method and the DFT applied with the different schemes (S nodes, J nodes and M nodes) are compared with the respective reference solution. First, the general behavior of K_I is well retrieved by both approaches (the J-integral and the DFT). On the other hand, comparing the numerical results with the reference solution by means of Eq. (20), a numerical error of less than 2% is obtained for all the proposed schemes except for the S-nodes approach for small cracks. This difference could be attributed to numerical errors associated with the closeness of the crack tip. Nevertheless, the other DFT results are in very good agreement with the reference solution.
Figure 13 shows the deformation pattern of the SENT geometry.
Three Point Bend (TPB)
Figure 14 shows the results for the TPB specimen. There, K_I has a sharper behavior compared with the SENT results, but all the schemes successfully capture this response. In this case, the J-integral method gives more accurate results than the displacement-based ones, although the difference is less than 4%. The S-nodes scheme again produced the highest differences for small crack sizes (a/t < 0.4). These differences are attributed to the model and the boundary closeness, leading to the conclusion that the S-nodes scheme is very sensitive to this error. The geometry deformation pattern is shown in Figure 15, where the symmetry of the solution and the crack opening mode can be appreciated. Figure 16 shows the results for the FPB specimen. In this example, the crack opening mode II is the most important, and K_II values are also calculated and compared to the reference solution (Shahani and Tabatabaei, 2008). For small cracks, the J-integral method gives more accurate results than the displacement-based ones. For longer cracks, the displacement-based methods show a precision comparable to the J-integral. In Fig. 17, a non-symmetric displacement behavior leading to a crack opening mode II is observed.
Compact Specimen (CS)
Figure 18 shows the results obtained for the CS geometry. The J-integral method has a higher difference (almost 7%) from the reference solution; this difference is due to the effect of the initial notch, which is not considered in the reference solution and is mitigated for longer cracks. In this example, the DFT results are more accurate than the J-integral results, with the exception of the S-nodes scheme. These results justify the simplification made to the original CS geometry. The compact specimen displacement solution is plotted in Figure 19.
Thick Walled Cylinder (CYL1)
The thick walled cylinder is a more complicated geometry because of its configuration and loading, and a finer mesh is needed to achieve convergence. Results for this geometry are shown in Figure 20, using the reference solution obtained from the API 579 standard for comparison. The pressure load acting on the inner cylinder face leads to a symmetric load with respect to the crack surface, as can be seen in Figure 21. The crack growth mode is mode I (K_I), since the crack grows straight for this load case.
Thin Walled Cylinder (CYL50)
Finally, a thin walled cylinder is solved. Higher values of K_I are found in comparison with the previous geometries, and both the J-integral and displacement-based methods give good results, with differences of around 1% for long cracks, as can be seen in Figure 22. The deformation pattern of the thin walled cylinder is quite different from that of the thick one. Figure 23 shows a deformed R/t = 50 cylinder; the symmetry is well retrieved and the displacement variation through the thickness is negligible.
Computing time
The performance of the different schemes in terms of computing time and accuracy is analyzed in the context of the BEM. After solving the elasticity problem, boundary displacements and tractions are known, and the solution at any internal point can be retrieved in a post-processing routine by means of Eq. (1). The specific SIF calculations for each scheme are small compared to the calculation time of the internal points, and proportional to the number of evaluating points. Figure 24 shows a comparison between the schemes, where the number of required evaluating points and internal points for each method are plotted. The fastest scheme corresponds to the S nodes, because this scheme only uses information from boundary nodes, which are directly available from the BEM solution, followed by the J nodes and M nodes, which require fewer internal points than the J-integral. The time spent in the evaluation of internal points depends on many factors, but is independent of the SIF calculation method.
The accuracy of the schemes, however, is inversely proportional to the computation time. The most accurate scheme is the J-integral, closely followed by the fitted M nodes and J nodes. S nodes are less accurate than the other schemes, as can be seen in Figure 25, where the mean relative error for the different test specimens is shown. It is noteworthy that the accuracy is proportional to the number of evaluating points. All the schemes studied give good accuracy and can be used to estimate K_I. Each method has different advantages and is suitable for implementation in the BEM, as has been demonstrated. The proposed methodology is also applicable to FEM models.
Figure 24: Computing load for the different schemes.
Figure 25: Mean relative error for the different schemes.
Development of AM Technologies for Metals in the Sector of Medical Implants
Additive manufacturing (AM) processes have undergone significant progress in recent years, having been implemented in sectors as diverse as automotive, aerospace, electrical component manufacturing, etc. In the medical sector, different devices are printed, such as implants, surgical guides, scaffolds, tissue engineering, etc. Although nowadays some implants are made of plastics or ceramics, metals have been traditionally employed in their manufacture. However, metallic implants obtained by traditional methods such as machining have the drawbacks that they are manufactured in standard sizes, and that it is difficult to obtain porous structures that favor fixation of the prostheses by means of osseointegration. The present paper presents an overview of the use of AM technologies to manufacture metallic implants. First, the different technologies used for metals are presented, focusing on the main advantages and drawbacks of each one of them. Considered technologies are binder jetting (BJ), selective laser melting (SLM), electron beam melting (EBM), direct energy deposition (DED), and material extrusion by fused filament fabrication (FFF) with metal filled polymers. Then, different metals used in the medical sector are listed, and their properties are summarized, with the focus on Ti and CoCr alloys. They are divided into two groups, namely ferrous and non-ferrous alloys. Finally, the state-of-art about the manufacture of metallic implants with AM technologies is summarized. The present paper will help to explain the latest progress in the application of AM processes to the manufacture of implants.
Introduction
Nowadays, industry is undergoing the 4th industrial revolution, which involves a lot of different fields such as nanotechnology, Internet of Things (IoT), and Artificial Intelligence (AI), among others. It also includes additive manufacturing (AM), which is the technology that builds 3D objects from successive layers. In recent years, the use of 3D printed parts in the medical sector has increased. According to the Wohlers report [1], the medical sector was the third most important one in the US in 2014 ( Figure 1).
Additionally, the planet's population is increasing every year, along with life expectancy. It is therefore not difficult to imagine the different medical problems that can arise. For example, the appearance of new diseases or more people in need of surgery for organs transplantation as well as body parts' replacements such as knees or hips.
According to the current technological development, it is possible to think about personalized medicine, for instance, the customization, design and fabrication of patient-particular products using AM technologies. Moreover, this technology could offer several significant advances: • Creation of ideal products. Polymers are common in different medical applications such as tissue engineering [2], and the manufacture of surgical planning prototypes [3][4][5][6][7] and scaffolds [8]. Ceramic materials have also been employed in scaffolds [9]. However, in recent years the use of metals has increased in medicine. They can be found in several applications, such as surgical guides [10], prostheses [11], implants [12], etc. (Figure 2).
Most metallic parts used in the medical sector have complex shapes, in many cases combined with porous structures that favor their fixation in the body by means of osseointegration. AM can provide these shapes without excessively increasing costs. In addition, it allows customized parts to be produced from the DICOM (Digital Imaging and Communications in Medicine) files obtained, for example, in radiology tests.
The present paper focuses on the recent advances in AM of metallic implants, which are tissues or prostheses that are placed inside or on the surface of the body. Prostheses are artificially made parts of the body that replace a part that is missing, either internal or external.
First, the main AM technologies for metals are explained: binder jetting (BJ), selective laser melting (SLM), electron beam melting (EBM), direct energy deposition (DED), and fused filament fabrication (FFF). Next, the main properties of the metals that are used to manufacture prostheses and implants are summarized. Then, the recent advances regarding applications of AM metallic implants are presented. Finally, the main conclusions of the paper are summarized.
Table 1. Advantages and disadvantages of binder jetting [24][25][26].
Advantages: no need to design nor use supports; unused powder can be reused; wide range of materials; fast process; large build size.
Disadvantages: limited success in producing metallic parts; worse mechanical properties than powder bed fusion processes; low density; requirement of post processing (sintering/infiltration).
Although this technique is mainly used for ceramic materials, for example to obtain sand molds and cores in the sand casting process [27], it is also employed for metal matrices [28]. For example, iron parts with enhanced strength are obtained by means of bronze infiltration [29]. However, the high density of metals makes them less stable than other materials, and the fine particles can be prone to oxidation [26]. Different materials such as titanium [30], stainless steel [20], CoCr alloys [31], or Inconel [21] have been manufactured with binder jetting. In addition, since this technology does not require the use of an energy beam for processing metals, it is a good choice for reflective and thermally conductive metals, which can be challenging to be processed by powder bed fusion technologies [32].
Powder Bed Fusion by Selective Laser Melting (SLM)
Powder bed fusion by selective laser melting (SLM) using metallic powder is an AM process similar to selective laser sintering (SLS). Unlike SLS, in which the particles are only sintered, the SLM technique uses a high power-density laser to fully melt and fuse the metallic powders together. Other common names for this powder bed fusion technology for metals are DMLS (direct metal laser sintering) and DMP (direct metal printing). The process starts with the spreading of a thin layer of the metal powder. Then, the laser selectively melts and fuses the metallic powder. After this, the build platform is lowered in the Z direction and the process is repeated until the required part is built. SLM takes place in a chamber with an inert gas atmosphere, using, for instance, argon or nitrogen.
Two examples of machines using this technology are the SLM 125 and SLM 280 2.0 of SLM Solutions Group AG, Lübeck, Germany [33]. On the one hand, SLM 125 has a build volume of 125 × 125 × 125 mm 3 and a printing speed of 25 cm 3 /h. On the other hand, SLM 280 2.0 has a build volume of 280 × 280 × 365 mm 3 and a printing speed of 113 cm 3 /h. These latter values depend on the materials and part geometry.
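As a rough illustration of how these nominal build rates translate into build times, one can divide the part volume by the quoted rate, as in the sketch below; real build times depend strongly on geometry, supports, layer thickness and scan parameters, and the 60 cm3 example part is hypothetical.

```python
# Rough build-time estimate from the nominal build rates quoted above.
NOMINAL_BUILD_RATE_CM3_H = {"SLM 125": 25.0, "SLM 280 2.0": 113.0}

def estimate_build_time_hours(part_volume_cm3, machine, packing_factor=1.0):
    """packing_factor > 1 can crudely account for supports or multiple parts."""
    return part_volume_cm3 * packing_factor / NOMINAL_BUILD_RATE_CM3_H[machine]

# Example: a hypothetical 60 cm^3 implant with 20% extra volume for supports
for machine in NOMINAL_BUILD_RATE_CM3_H:
    print(machine, round(estimate_build_time_hours(60.0, machine, 1.2), 1), "h")
```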
The main advantages and disadvantages of SLM can be seen in Table 2. Although ideal materials are pure metals, different alloys can also be used in the SLM process, such as stainless steel, CoCr alloys, titanium alloys, and aluminum [41].
Powder Bed Fusion by Electron Beam Melting (EBM)
Powder bed fusion by electron beam melting (EBM) is a 3D printing process for metals in which the part is built layer-by-layer by melting the material, normally in the form of powder, with an electron beam at high temperature in a high vacuum atmosphere.
Firstly, a layer of metal powder is distributed onto the build platform, and melted by the electron beam. Then, the build platform is lowered, and another layer of metal powder is subsequently coated on top.
One example of a machine using this technology is the Arcam EBM Q20plus, from GE Additive, Boston, MA, USA [42]. It has an electron beam power of 3 kW and a build volume of 350 mm in diameter and 380 mm in height. Additionally, EBM takes place in a vacuum and at high temperatures.
The main advantages and disadvantages of EBM can be seen in Table 3. Table 3. Advantages and disadvantages of EBM [41,43].
Advantages: possibility of working at elevated temperatures; better protection against contamination; low level of residual stresses; absence of shrinkage, no thermal post-processing; freedom of design, because of fewer supports; allows stacking parts and obtaining meshes.
Disadvantages: high fatigue; danger of electrostatic charge of the powder; only conductive alloys can be obtained; rough finish that requires polishing (depending on process conditions).
Titanium alloys, CoCr alloys, stainless steel and Inconel are frequently employed in EBM processes [41].
Direct Energy Deposition (DED)
Direct energy deposition (DED) is an additive manufacturing process in which an energy source, such as an electron beam, laser beam or electric arc, is aimed at the material (in the form of powder or wire) in order to fuse it by melting while it is being deposited. Due to the use of four- or five-axis machines, the material can be deposited from any angle onto the existing surfaces of the object and then melted. The process requires a chamber with inert gas to control the material properties and to avoid oxidation of the material.
Two examples of machines using this technology are the INTEGREX i-400 AM and the DMD 503D/505D, from Yamazaki Mazak Corporation, Oguchi, Japan. On the one hand, INTEGREX i-400 AM is a five-axis multi-tasking machine and it is used for 3D printing of materials which are difficult to be machined [44]. On the other hand, DMD 503D/505D has a build volume of 1590 × 1400 × 1470 mm 3 and a position accuracy of 0.03 mm [45].
The advantages and disadvantages of DED can be seen in Table 4.
Material Extrusion by Fused Deposition Modelling (FDM) or Fused Filament Fabrication (FFF)
In the FFF or FDM technique, a filament is melted, extruded through a nozzle and deposited on a printing bed layer-by-layer. Once a layer is deposited, the build platform is lowered (in some low-cost machines, the build platform does not move down; instead, the print head moves up). Finally, when the piece is completed, it is placed into a sintering furnace to remove the plastic binder and sinter the metal particles together. Both debinding and sintering are required after extrusion, and they cause material shrinkage, as illustrated below.
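Because debinding and sintering shrink the part, metal-FFF workflows typically scale the CAD model up before printing. The sketch below shows that compensation arithmetic with an assumed 16% linear shrinkage; the value and the part dimensions are hypothetical, since the actual factor depends on the feedstock and the furnace cycle.

```python
# Scale-up compensation for sintering shrinkage.
# The 16% linear shrinkage and the part dimensions are assumed example values.
linear_shrinkage = 0.16
scale_factor = 1.0 / (1.0 - linear_shrinkage)   # ~1.19x oversize at print time

nominal_mm = (40.0, 25.0, 12.0)                 # target dimensions after sintering
printed_mm = tuple(round(d * scale_factor, 2) for d in nominal_mm)

print(f"Scale factor: {scale_factor:.3f}")
print(f"Print at {printed_mm} mm to end up near {nominal_mm} mm after sintering")
```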
One example of a machine of this technology is the Metal X of Markforged, Watertown, MA, USA. It has a build volume of 300 × 220 × 180 mm 3 and a Z resolution of between 50 and 125 µm [50].
The main advantages and disadvantages of FFF can be seen in Table 5. Table 5. Advantages and disadvantages of FFF with high metallic content [51][52][53].
Advantages: simple technology; wide range of materials; possibility to use low-cost machines; reliable.
Disadvantages: low accuracy; shear stress on the nozzle tip wall; low resolution; poor mechanical properties, although enhanced with respect to polymers; thermal post-process (associated with shrinkage).
Although this technique is usually associated with plastic materials such as polylactic acid (PLA) or acrylonitrile butadiene styrene (ABS), the filament can be filled with a high percentage of metallic particles in order to print metallic parts. Some of the metals used are copper [54], stainless steel [55], and titanium [56].
Comparison of the AM Technology Process for Metals
Before any object is 3D printed, several factors must be considered: depending on the objective of the product, the material used, and so on, it might be better to use one technology or another. Therefore, it is important to compare different aspects of the AM technologies (Table 6) [26,37,73-75]. Regarding manufacturing costs, the following values are the printing costs per unit [76]: (1) $2.50-4 for laser powder bed, (2) $1.33-3 for e-beam powder bed, (3) $0.33-1.5 for powder DED, (4) $0.25-0.6 for wire DED, and (5) $0.08-1.5 for binder jetting. Thus, among the studied processes, the most expensive one is laser powder bed (SLM), followed by e-beam powder bed (EBM).
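The quoted cost ranges can be condensed into a quick relative comparison, for example by taking the midpoint of each range, as sketched below. Using midpoints is a simplification of mine, and only ratios between processes are computed, so the unstated unit basis of the figures in [76] does not affect the comparison.

```python
# Relative printing-cost comparison built from the ranges quoted in [76].
# Midpoints are a simplification; only ratios between processes are reported.
cost_ranges = {
    "Laser powder bed (SLM)": (2.50, 4.00),
    "E-beam powder bed (EBM)": (1.33, 3.00),
    "Powder DED": (0.33, 1.50),
    "Wire DED": (0.25, 0.60),
    "Binder jetting": (0.08, 1.50),
}
midpoints = {name: sum(rng) / 2 for name, rng in cost_ranges.items()}
cheapest = min(midpoints.values())

for name, mid in sorted(midpoints.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name}: midpoint ${mid:.2f}, ~{mid / cheapest:.1f}x the cheapest process")
# SLM comes out roughly 7-8x and EBM roughly 5x the cost of wire DED on this basis.
```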
Metals Used in the Medical Sector
Different metals and alloys are currently used in the medical sector. They can be divided into two categories: ferrous and non-ferrous (Figure 3). Ideally, an alloy used for implants should be biocompatible and have good mechanical properties, i.e., high tensile, compressive and shear strength, high fatigue strength to prevent failure under cyclic loading, and a low elastic modulus comparable to that of bone. It should also have high corrosion resistance, high wear resistance, and a low price. Another important factor to be considered is the possibility of obtaining porous structures, because they influence both the mechanical strength and the biological properties of the tissues. On the other hand, the osseointegration of implants into bone depends on both biomechanical interlocking and biological interactions, which are related to the surface roughness of the implants. These properties are discussed in more depth in Section 4.10.
Among the ferrous alloys, the most frequently employed one to manufacture implants is stainless steel due to its high corrosion resistance. However, it has low fatigue strength and undergoes deformation. Consequently, it is mainly used for non-permanent implants. Regarding the non-ferrous materials, some of them are bio tolerant like CoCr alloys, gold, niobium and tantalum, while pure titanium and titanium alloys are bio inert [77]. Ion release is one of the main disadvantages of CoCr alloys, titanium, and titanium alloys, although they provide high corrosion resistance. In addition, differences in mechanical properties between the bone and the implant can lead to stress shielding problems, with either a loosening of implants or the growth of soft fibrous tissue [78]. Figure 4 provides a comparison of metals by focusing only on the material price. This does not include the manufacturing costs, and prices are related to the market price. According to their price, metals can be divided into three categories. The green area corresponds to the cheap ones, the yellow area to the medium ones, and the red area to the expensive ones, in this case tantalum, with a price above 110 €/kg. The manufacturing costs can be divided into two groups: fixed and recurring [62]. On the one hand, the fixed costs correspond to the manufacturing tools, dies, machines, etc. These costs are amortized over time and, therefore, the more the 3D printing machines are used, the lower the costs per printed piece are. On the other hand, the recurring costs include the material price and labor. Regarding the materials, it is important that they are easily 3D printed and have good physical and mechanical properties, but without excessive cost.
The following paragraphs present the main characteristics of the metals that are employed to manufacture metallic implants by means of AM processes.
Ferrous
A ferrous metal contains iron in its composition, as well as carbon. They can be divided into two categories, namely (a) alloys such as stainless steel and (b) iron.
Stainless Steel
Stainless steel is an iron alloy containing at least 10.5% chromium and up to 1.2% carbon. Chromium gives stainless steel its corrosion resistance, thanks to a chromium oxide surface layer that regular steel lacks. Additionally, stainless steel can contain other elements in lower proportions, such as molybdenum or nickel [80]. Some examples of stainless steel, as defined by the Society of Automotive Engineers (SAE), are SAE 304, SAE 316, SAE 316L and boron-titanium modified stainless steel. SAE 316L has been used in recent years for biomedical applications. Sintering of the material in a nitrogen atmosphere helps to retain the nickel ions in the stainless steel [81], which would otherwise be released from implants due to local corrosion [82]. Additionally, it is necessary to carry out cell culture studies, such as cytotoxicity assays or cell imaging [83], to verify its biocompatibility. As for SLM manufactured stainless steel, biocompatibility increases when the material is coated with hydroxyapatite [84].
The main advantages and disadvantages of stainless steel can be seen in Table 7. Table 7. Advantages and disadvantages of stainless steel [80,85].
Advantages: high corrosion resistance; heat resistance; biocompatible; excellent mechanical properties; easy fabrication.
Disadvantages: sometimes difficult to handle; release of chromium and nickel; prone to deformation; low fatigue strength when subjected to oxidation; non-porous.
Iron
Iron is the most common element on Earth by mass and is also an essential trace element in the human body. Ferrous materials can be classified into three categories depending on their carbon content: wrought iron (less than 0.08% C), carbon steel (between 0.08 and 1.76% C) and cast iron (more than 1.76% C). Additionally, cast iron can be divided into smaller groups such as white, grey, malleable and nodular graphite cast iron.
The advantages and disadvantages of wrought iron can be seen in Table 8. Table 8. Advantages and disadvantages of wrought iron [86].
Advantages: tough; excellent mechanical properties; corrosion resistance; excellent weldability.
Disadvantages: cannot be hardened; sometimes difficult to handle; high cost.
Weldability is an important factor to take into consideration when a part needs to be joined to another part of either a similar or dissimilar material [87], e.g., in implants. If cracks are easily avoided, the material is considered 'weldable'.
A Fe-Mn alloy has been used to produce bone scaffolds by SLM [88]; Mn is added to control the high degradation rate of Fe. Moreover, in another study, a Fe-HA (iron-hydroxyapatite) composite was manufactured using different particle sizes [89]. Not only were better corrosion rates obtained than for pure iron, but the addition of HA also brought the mechanical properties closer to those of bone. For example, the tensile strength of pure iron is 215 MPa, that of Fe + 2.5 wt% HA (1-10 µm) is 117 MPa, and the strength of the human femur bone is 135 MPa (longitudinal tension) [90].
The advantages and disadvantages of carbon steel can be seen in Table 9. Table 9. Advantages and disadvantages of carbon steel [86].
Advantages: excellent mechanical properties; good weldability; good formability; hard and tough; low stress concentration; resistant to oxidation.
Disadvantages: ductility decreases with carbon content; susceptible to rust and corrosion.
The main advantages and disadvantages of cast iron can be seen in Table 10. Table 10. Advantages and disadvantages of cast iron [86].
Advantages: excellent mechanical properties; biocompatible; cytocompatibility; good castability; low stress concentration; resistant to oxidation.
Disadvantages: high brittleness; low machinability.
Machinability is measured using the machinability index, for which a value of 100 corresponds to the reference material, carbon steel 1212. A value lower than 100 means that the material is more difficult to machine and production times are therefore longer, while a value higher than 100 means it is easier to machine. For cast iron, the index ranges from 36 to 78 [91].
Non-Ferrous
Non-ferrous metals do not contain iron in appreciable amounts and are generally more costly than ferrous metals due to their desirable properties. They can be divided into different categories: (a) alloys, (b) light metals, (c) rare metals, and (d) white.
CoCr Alloys
Cobalt-chromium alloys are composed mainly of cobalt and chromium. They are used in aerospace engineering, amongst other applications, and, taking into account their excellent properties, they have also been used in dentistry for decades [85]. The CoCr alloys most employed in medical applications are Co-Cr-Mo, Co-Ni-Cr-Mo, and Co-Cr-W-Ni [92]. The major drawback of these alloys is ion release, which can lead to adverse effects such as toxicity, metallic taste and mucositis.
The advantages and disadvantages of the CoCr alloys can be seen in Table 11. Table 11. Advantages and disadvantages of the CoCr alloys [68,92].
Advantages: excellent mechanical properties; excellent corrosion resistance; biocompatibility.
Disadvantages: wear and corrosion can lead to the release of metal ions; high cost; limitations on component complexity.
Nickel Alloys
Nickel (Ni) alloys are metals made from a combination of nickel as the primary element with other elements, as in Ni-Al, Ni-Cr or Ni-Ti alloys. Although nickel itself is toxic, in Ni-Ti alloys a titanium oxide layer forms on the surface that limits nickel release [92]. An example of a Ni-Ti alloy is Nitinol, which contains approximately 50% Ni and 50% Ti. Nitinol is a shape memory alloy, which recovers its original shape after severe deformations. It is used for hard tissue implants and in dentistry. In recent years, the behavior of SLM printed Ni-Ti alloys has been addressed [93,94].
The advantages and disadvantages of the nickel alloys can be seen in Table 12. The low thermal conductivity of nickel complicates its manufacture, for example in machining or in high temperature AM processes, because heat cannot easily be removed from the working area, thus increasing working temperatures.

Titanium

Titanium (Ti) is the ninth-most abundant chemical element in the Earth's crust and it can be combined with other elements in order to form the known titanium alloys. It is widely used in several applications such as dental implants, and during the last few years it has assumed greater importance in biomedical applications such as hip prostheses, especially due to its biocompatibility and high fracture resistance. These two properties are important in prostheses for two reasons: (1) biocompatibility, so that the host tissue does not reject the implant; and (2) high fracture resistance, so that the implant does not fracture. Commercially pure titanium (CP-Ti) has excellent biocompatibility because of a stable oxide layer that forms spontaneously on its surface [96].
Regarding the use of SLM, Taniguchi et al. [97] investigated the bone ingrowth of different pore sizes of titanium implants manufactured by SLM. Finding the best titanium implant for osseointegration is essential, so that the implant integrates as quickly as possible with the bone.
The advantages and disadvantages of titanium can be seen in Table 13.

Titanium Alloys

Titanium alloys contain titanium and other chemical elements. Different alloys are used in medical applications, such as α + β alloys, Ti-Al-Nb and β-Ti alloys [92]. However, the most typical example is Ti-6Al-4V, an α-β titanium alloy containing 6% aluminum and 4% vanadium. The Ti-6Al-4V ELI (extra-low interstitial) variant provides higher ductility and fracture toughness than the conventional alloy [100].
The three most developed techniques for additively manufacturing titanium alloy structures are direct energy deposition (DED), selective laser melting (SLM) and electron beam melting (EBM) [65,101]. For example, porous implants can be manufactured using SLM technology, and they can mimic human bone at a porosity of 60% [102].
The advantages and disadvantages of the titanium alloys can be seen in Table 14. Other titanium alloys, such as Ti-6Al-7Nb [104,105], have been used for printing implants, as well as others such as Ti-24Nb-4Zr-8Sn [106] or Ti-33Nb-4Sn [107]. A typical titanium-niobium alloy used for prostheses is Ti-42Nb.
Magnesium
Magnesium (Mg) is a light material with a relatively high mechanical strength that can replace aluminum in some applications. However, its accelerated corrosion rate in physiological environments reduces its potential use in implants [108]. Nevertheless, magnesium-based biodegradable materials are promising candidates, making a second surgery for implant removal unnecessary [109,110]. Magnesium powder is flammable and should be handled with care [111].
The main advantages and disadvantages of magnesium can be seen in Table 15. Magnesium scaffolds have been prepared with the purpose of bone regeneration [113].
Tantalum
Tantalum (Ta) is a very chemically resistant metal and, consequently, it is widely used in biomedical applications. Additionally, it is inert to practically all the organic and inorganic compounds. Tantalum has been printed with the SLM technique [114].
The main advantages and disadvantages of tantalum can be seen in Table 16.

Zinc

Zinc (Zn) is one of the most indispensable trace elements in the human body and it is often employed in industry for the surface treatment of steel, for example in galvanization or electroplating processes. In medical applications, it has been used in cardiovascular stents and dental implants, amongst others.
The strength of zinc can be improved by alloying with elements such as Mg, Ca, Sr, Li, and Cu [117]. On the other hand, inorganic Zn compounds such as Zn-hydroxyapatite [118] or ZnO [119] can be used to manufacture implants.
The main advantages and disadvantages of zinc can be seen in Table 17.
Other Metals and Alloys
There are other metals that are not as commonly used in the medical sector, except in a few biomedical applications. For instance, copper (Cu) is the third most important trace element in the human body. Some commercially available copper alloys are Cu-Al-Ni and Cu-Al-Mn [92]. However, it has been proved that it is both difficult and expensive to print copper [120].
Pure Tungsten (W) powder has also been used in SLM processes [121], as well as pure Niobium (Nb) [122]. Neodymium (Nd) has been added to the Mg-5Zn-0.35Zr-0.13Y, improving the mechanical strength and corrosion resistance of the alloy [108].
Although aluminum is not a suitable material for implants because of its easy oxidation, it is found in many titanium alloys like Ti-6Al-4V [85].
Gold alloys were used in the past for dental implants [92]. However, they are not commonly used nowadays because of their high price.
Comparison of the Metals
A comparison of the different metals was made, regarding their most important properties (Table 18).
Applications
Metals have been widely used in different applications in recent years. Not only can they be used in the automotive or aeronautical sectors, but also in the medical field. Within medicine, they can be employed for several purposes: scaffolds, implants, surgical guides, fixation guides, etc. The main applications of 3D printed metals in the manufacture of implants are presented in the following subsections.
Nowadays, many implants, such as hip or knee prostheses, are manufactured in metallic materials. This is due to their high mechanical and fatigue strength and the ease of manufacturing them with conventional machining processes. Some authors have attempted to print prostheses by means of AM technologies. Unlike other manufacturing processes such as casting, AM technologies allow customized prostheses to be manufactured in serial batches without incurring excessive costs.
Several kinds of implants are available: cranial, maxillofacial, spinal, hip, knee, or skeletal reconstruction implants [12] among others. The ISO 5832 standard summarizes the characteristics, as well as the test methods, of the material to be used in metallic implants. For example, ISO 5832-1 [137] corresponds to wrought stainless steel, ISO 5832-3 to wrought titanium 6-aluminium 4-vanadium alloy [138], and ISO 5832-4 [139] corresponds to cobalt-chromium-molybdenum alloys.
The following subsections present the recent advances in metallic implants manufactured by AM methods, for cranial implants, maxillofacial implants, spinal implants, upper & lower limb implants, and dental implants.
Cranial Implants

Jardini et al. [12] manufactured cranial implants with direct metal laser sintering (DMLS, the same technology as SLM). The Ti-6Al-4V alloy was used (Figure 2a).
Maxillofacial Implants
Suska et al. [142] used EBM of Ti-6Al-4V alloy to manufacture a jaw prosthesis, which was individually designed and implanted, with a good aesthetic outcome ( Figure 6). They added diamond-like porous structures to the upper and lower parts of the implant to favor the fixation of the prosthesis by means of osseointegration. The strut size employed was 0.3 mm and the pore size was 0.8 mm. Yan et al. [143] employed Ti-6Al-4V titanium alloy to manufacture a mandibular prosthesis with a 3D mesh by means of EBM. The mesh porosity was 81.38% and the strut size, 0.7 mm.
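A quick way to sanity-check strut and pore dimensions like these is to estimate the porosity of an idealized cubic strut lattice whose unit cell size is roughly the strut size plus the pore size. The closed-form expression below is a textbook first-order approximation, not something taken from the cited papers, and diamond-like cells such as Suska's will deviate from it; it is shown only to give a feel for the numbers.

```python
# First-order porosity estimate for an idealized cubic strut lattice
# (approximation only; the cited implants use diamond-like unit cells).
def cubic_lattice_porosity(strut_mm: float, pore_mm: float) -> float:
    a = strut_mm + pore_mm                      # unit cell size
    x = strut_mm / a
    relative_density = 3 * x**2 - 2 * x**3      # edge struts minus corner overlap
    return 1.0 - relative_density

print(f"Strut 0.3 mm, pore 0.8 mm -> porosity ~{cubic_lattice_porosity(0.3, 0.8):.0%}")
# ~82% for the Suska et al. dimensions, i.e. in the same high-porosity range as the
# 81.38% mesh porosity reported by Yan et al. for their (differently sized) lattice.
```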
Moiduddin et al. obtained a titanium zygomatic implant with the EBM technique, using Ti-6Al-4V ELI (extra low interstitial) powder [144]. The same author [145] compared different kinds of Ti-6Al-4V ELI mandibular implants for goats: an EBM plate with mesh, an EBM titanium plate without mesh and a commercial reconstruction plate. They found that the reconstructed plates with mesh showed a better fit than the other ones, and they obtained a very good fit with titanium alloy mandibular implants [146]. Ciocca et al. [147] built DMLS titanium alloy meshes for the regeneration of atrophic maxillary arches. They used a 0.6 mm thickness mesh. Jardini et al. [148] manufactured Ti-6Al-4V parts for maxillofacial implants, using the DMLS technology. The same technology and material were used to obtain customized parts for upper maxillary implants [149].
Spinal Implants
Yang et al. [150] used the EBM technique to obtain Ti-6Al-4V vertebral bodies of sheep. Xu et al. [151] manufactured vertebral implants with the EBM technique and Ti-6Al-4V material, and Li et al. [152] tested porous artificial vertebral bodies in vivo, manufactured with the same material and technique. Choy et al. [14] printed titanium porous vertebral prostheses and performed in vivo spinal surgery. Siu et al. [153] applied EBM to obtain Ti-6Al-4V interbody cages for the lumbar area, in a case study with a deformity caused by osteoporotic fractures.
Hollander et al. [154] used direct laser forming (DLF) to obtain Ti-6Al-4V alloy vertebral bodies (Figure 7). They manufactured meshes with nominal pore sizes of 500, 700, and 1000 µm, which were reduced by 300 µm after the process. The prostheses' surfaces allowed the growth of human osteoblasts.
McGilvray et al. [99] compared the performance of polyetheretherketone (PEEK), titanium-coated PEEK and 3D printed porous titanium alloy cages for interbody fusion in the lumbar spine of sheep. They reported higher cell ingrowth in the titanium implants than in the PEEK or titanium-coated PEEK cages.
Upper Limb Prostheses
Zou et al. [155] obtained customized macro-porous shoulder Ti-6Al-4V prostheses with the EBM technique, implanted them and observed good short-term follow-up effects. In 2017, the same process was used for the first time to manufacture a mold to cast a titanium first metacarpal hand implant [156].
Chest Implants
In 2013, Turna et al. reported the first 3D-printed chest implant [157]. It consisted of a plate for sternum and ribs. Aranda et al. obtained a more advanced implant in 2015 [158]. Aragón and Méndez manufactured a more flexible implant [159]. In 2017, a titanium chest implant was manufactured with the EBM technique, and further fixed [160], showing the versatility of the 3D-printing processes to obtain complex shapes. A clavicle was reconstructed in pure Ti by means of EBM [161].
Pelvic Implants
A pelvic specific implant was manufactured in Ti-6Al-4-V with EBM and subsequently implanted [162].
Another pelvic patient-specific implant was manufactured with a laser powder bed fusion technology [163].
Lower Limb Prostheses
Cronskar et al. [42] produced Ti6Al4V hip stems by means of EBM. They reported a reduction of the fatigue limit using the rough surfaces obtained by 3D printing when compared to conventional machining. Murr [164] reported a Ti-6Al-4 V porous acetabular cup, manufactured with the EBM technique ( Figure 8).
Weißmann et al. [165] manufactured titanium alloy porous acetabular cups with the SLM technique. They tested three types of cells: twisted, combined and combined open, and found that their mechanical strength depends on the geometry of the unit cell employed, its dimensions and the volume and porosity responsible for the press fit of the prosthesis. A custom-made component of a hip implant endoprosthesis was obtained in titanium alloy with the same technique. The implant matched the anatomical features of the patient, with porous structures to favor osseointegration, and with good mechanical properties [166].
Croitoru et al. [168] printed porous Ti6Al4V femoral stems for a hip replacement using powder bed fusion technology (laser sintering). They found that large fenestrations confer an elastic behavior to the structure while also contributing to enhanced osseointegration. Arabnejad et al. [16] manufactured a titanium alloy stem taper-wedge implant with selective laser melting (SLM) (Figure 2e). They reported high mechanical strength with reduced stress-shielding, while the implant respected bone in-growth. Femoral implants have also been obtained with SLM in CoCrMo alloys [169].
Ruppert et al. [170] compared the performance of femoral implants manufactured by both the EBM and the SLM methods. Osseointegration was evaluated by means of mechanical testing. Coarse EBM implants showed higher removal torque than fine DMLS implants.
Murr et al. [171] made EBM porous structures for knee replacement, with Co-29Cr-6Mo alloy as the femoral and Ti-6Al-4V as the tibial component of the knee prostheses. Liu et al. used the same technique with titanium alloy as material to manufacture porous knee prostheses [172].
Dental Implants
Dental restorations have been obtained with the SLM technology [173]. Tolochko et al. used the combination of SLS and SLM to obtain titanium dental implants [174]. CoCrMo alloys have also been employed for the same purpose, with SLM processes [175][176][177].
Ortorp et al. [178] compared four different manufacturing techniques to obtain CoCr dental prostheses: lost wax casting, lost wax with milled wax, milling, and direct metal laser sintering (DMLS). The best fit was reported for the DMLS technique.
Implants in General
The binder jetting technology was used to manufacture stainless steel bone scaffolds [179]. Four different lattices were studied, and it was observed that mechanical strength depends on the type of lattice. Sintering time and temperature also influence mechanical strength. Porous titanium parts have been characterized in order to use them as implants [180].
The DED technique has been employed, for example, to obtain functionally graded structure in Ti-Mo alloys [181].
As an example of the extrusion processes (FFF or FDM), polylactic acid (PLA) and polyethylene terephthalate (PET) polymeric filaments mixed with stainless steel 316L and copper alloy Cu-10Sn allowed for the printing of multi-material parts [182]. As can be observed in Table 19, the largest number of references in the present paper corresponds to titanium alloys, firstly with the EBM technique and secondly with the SLM technique. They are followed, by far, by other technologies such as BJ, DED and FFF. Both EBM and SLM are powder bed fusion technologies. According to Table 6, both technologies have in common high dimensional accuracy and high corrosion resistance of the parts, because of the use of inert atmospheres. SLM provides higher resolution and part complexity than EBM. However, the printing speed of EBM is higher than that of SLM, and it is cheaper. In EBM, only conductive alloys can be used. On the contrary, SLM can be used for different alloys such as Inconel, stainless steel, etc.
Comparison of the AM Techniques and Materials Used for Metallic Implants
In the following paragraphs, the impact of EBM and SLM techniques on biocompatibility, porosity, mechanical performance, and biodegradability of the implants is addressed.
Nowadays, the concept of biocompatibility means not only that a metal should be non-toxic but also that it should have a positive effect when interacting with living cells [183]. The three most employed materials for implants (Ti alloys, CoCr alloys and stainless steel) show high biocompatibility with the human body. However, high temperature AM processes such as EBM and SLM modify the physical, chemical and mechanical properties of the alloys, which are related to biocompatibility. Along these lines, Wang et al. found good haemocompatibility, no dermal irritation and no skin allergic reaction for Ti-6Al-4V alloy processed with both EBM and SLM [184]. In another comparative study between EBM and SLM processes, it was observed that SLM manufactured commercially pure titanium (CP-Ti) scaffolds presented higher cell viability and cell adhesion than EBM manufactured Ti-6Al-4V (Ti64) scaffolds [185]. The surface finish of the printed parts is an important factor influencing biocompatibility, since it affects cell attachment, proliferation and differentiation [38]. Low roughness values below 2.0 µm were reported to improve bone regeneration in titanium implants [186]. However, SLM and EBM lead to higher roughness values of 5-20 µm and 20-50 µm respectively [187]. In order to reduce roughness and improve cell adhesion along with cell proliferation, for example, a laser polishing operation can be applied [177].
The porosity of implants is directly related to cell growth. For example, the porosity of cancellous bone ranges from 50% to 90% [77]. As for pore size, a certain variability is required, with small pores to improve cell attachment and large pores that favor nutrient transport [78]. For example, pore size values between 200 and 1000 µm are desirable in trabecular structures [117,188]. In addition, pores should be interconnected, in order to favor permeability and nutrient transport [80]. Regarding porosity, EBM combined with hot isostatic pressing achieved density values that were higher than 99%, while SLM did not exceed 97% [189]. Heinl et al. manufactured different Ti-6Al-4V porous structures with interconnected porosity for bone implants, using selective electron beam melting (SEBM) [190]. Xue et al. used laser engineered net shaping (LENS, a DED AM technology) to manufacture titanium porous implants for bone replacement [191]. Similar structures to those of the cancellous bone have been printed in titanium with the SLM technique [192].
High mechanical strength is important to protect patients with implants from fractures [193]. Titanium and some of its alloys have good mechanical properties, including high strength, a reasonably suitable elastic modulus, high fracture toughness and high fatigue strength [194]. However, additive manufacturing processes affect the properties of the material. For example, the compression strength of titanium aluminides obtained with the EBM technique, with preheating of the material and a vacuum surrounding, was similar to that of the wrought material [189]. The higher the preheating, the lower the residual stresses are in EBM. Excellent wear properties were also reported for EBM processes in the transverse direction [189]. SLM manufactured titanium alloys also presented good mechanical properties [104]. On the other hand, metal implants should mimic the elastic modulus of bones. However, titanium alloys usually have higher elastic modulus values (around 112 GPa) [195] than those of cortical bone, which range from 7.7 to 21.8 GPa [196,197]. For this reason, porous structures can be built that reduce the elastic modulus of solid materials [198], as sketched below.
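One way to get a feel for how much porosity is needed to close that stiffness gap is the Gibson-Ashby scaling law for open-cell lattices, E*/E_s ≈ (ρ*/ρ_s)^2. This relation is a standard approximation from the cellular-solids literature rather than something derived in the papers cited here, and the exponent and prefactor vary with lattice topology, so the numbers below are indicative only.

```python
# Gibson-Ashby open-cell scaling: E_lattice / E_solid ≈ (relative density)^2.
# Standard cellular-solids approximation; constants vary with lattice topology.
import math

E_solid_gpa = 112.0                 # typical Ti alloy modulus quoted in the text
for E_target_gpa in (7.7, 21.8):    # cortical bone range quoted in the text
    rel_density = math.sqrt(E_target_gpa / E_solid_gpa)
    porosity = 1.0 - rel_density
    print(f"Target {E_target_gpa:5.1f} GPa -> relative density ~{rel_density:.2f}, porosity ~{porosity:.0%}")
# Suggests porosities of roughly 56-74%, consistent with the ~60% porosity
# reported earlier in the text for bone-mimicking SLM titanium parts.
```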
Biodegradability is another important property of metallic implants. Alloys can be divided into two groups with regard to their biodegradability: materials with high mechanical properties but low biodegradability, such as stainless steel, titanium and CoCr alloys, and metals or alloys with higher biodegradability but lower mechanical strength, such as zinc, magnesium and iron [117]. For example, Ti and stainless steel structures do not degrade significantly with time, remaining in the body as a foreign object [195]. This can lead to several problems such as infections, physical irritation, inflammatory reactions, etc. [199].
Conclusions
In recent years, additive manufacturing has been successfully incorporated into the manufacture of metallic implants, thanks to the possibility of obtaining customized parts with porous structures that favor cell growth and osseointegration. The main conclusions are summarized next:
(1) The most-used metals in AM manufactured implants are titanium, titanium alloys, CoCr alloys, and stainless steel, mainly because of their high mechanical properties and biocompatibility. In addition, as a general trend, they maintain their properties when the parts are additively manufactured.
(2) The most popular techniques to obtain AM metallic implants are EBM and SLM. Both technologies belong to the powder bed fusion group, and both of them provide high dimensional accuracy and high corrosion resistance. EBM uses higher printing speeds than SLM, and it is cheaper. On the contrary, SLM allows higher resolution, better surface finish and higher part complexity than EBM.
(3) Several examples are available in the literature of cranial, mandibular, spinal, and upper & lower limb titanium alloy implants, among others, manufactured with EBM and/or SLM techniques.
(4) The use of BJ, DED, or FFF to manufacture metallic implants is still at an early stage, in which metallic structures have been obtained and characterized, but with few in vivo tests. Further research is required in order to use these technologies in implants.
The application of AM technologies to the manufacture of metallic implants is still under development. Both the improvement of the printing technologies and the research investigating new alloys will help to consolidate the use of AM technologies for this purpose.

Funding: This project was co-financed by the European Union Regional Development Fund within the framework of the ERDF Operational Program of Catalonia 2014-2020, with a grant of 50% of the total eligible cost, project BASE3D, grant number 001-P-001646.
Conflicts of Interest:
The authors declare no conflict of interest.
|
v3-fos-license
|
2018-12-02T15:31:30.053Z
|
2018-11-01T00:00:00.000
|
53728249
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/jia2.25197",
"pdf_hash": "d854f42c8dcfb9ef03359f91db4d123b4d644dfd",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46406",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"sha1": "d854f42c8dcfb9ef03359f91db4d123b4d644dfd",
"year": 2018
}
|
pes2o/s2orc
|
Previous incarceration impacts access to hepatitis C virus (HCV) treatment among HIV‐HCV co‐infected patients in Canada
Abstract Introduction The prevalence of hepatitis C virus (HCV) is far higher in prison settings than in the general population; thus, micro‐elimination strategies must target people in prison to eliminate HCV. We aimed to examine incarceration patterns and determine whether incarceration impacts HCV treatment uptake among Canadian HIV‐HCV co‐infected individuals in the direct‐acting antiviral (DAA) era. Methods The Canadian Co‐Infection Cohort prospectively follows HIV‐HCV co‐infected people from 18 centres. HCV RNA‐positive participants with available baseline information on incarceration history were included and followed from 21 November 2013 (when second‐generation DAAs were approved by Health Canada) until 30 June 2017. A Cox proportional hazards model was used to assess the effect of time‐updated incarceration status on time to treatment uptake, adjusting for patient‐level characteristics known to be associated with treatment uptake in the DAA era. Results Overall, 1433 participants (1032/72% men) were included; 67% had a history of incarceration and 39% were re‐incarcerated at least once. Compared to those never incarcerated, previously incarcerated participants were more likely to be Indigenous, earn <$1500 CAD/month, report current or past injection drug use and have poorly controlled HIV. There were 339 second‐generation DAA treatment initiations during follow‐up (18/100 person‐years). Overall, 48% of participants never incarcerated were treated (27/100 person‐years) compared to only 31% of previously incarcerated participants (15/100 person‐years). Sustained virologic response (SVR) rates at 12 weeks were 95% and 92% respectively. After adjusting for other factors, participants with a history of incarceration (adjusted hazard ratio (aHR): 0.7, 95% CI: 0.5 to 0.9) were less likely to initiate treatment, as were those with a monthly income <$1500 (aHR: 0.7, 95% CI: 0.5 to 0.9) or who reported current injection drug use (aHR: 0.7, 95% CI: 0.4 to 1.0). Participants with undetectable HIV RNA (aHR: 2.1, 95% CI: 1.6 to 2.9) or significant fibrosis (aHR: 1.5, 95% CI: 1.2 to 1.9) were more likely to initiate treatment. Conclusions The majority of HIV‐HCV co‐infected persons had a history of incarceration. Those previously incarcerated were 30% less likely to access treatment in the DAA era even after accounting for several patient‐level characteristics. With SVR rates above 90%, HCV elimination may be possible if treatment is expanded for this vulnerable and neglected group.
| INTRODUCTION
In light of significant advances in combination antiretroviral therapy (cART) resulting in dramatic reductions in AIDS-related morbidity and mortality, liver disease has emerged as the leading cause of death among people living with HIV, primarily due to hepatitis C virus (HCV) co-infection [1][2][3]. Due to shared routes of transmission, global estimates indicate that 2.3 million people are co-infected with HIV and HCV, with the greatest burden in eastern Europe and central Asia followed by sub-Saharan Africa [4]. Worldwide, approximately 60% of co-infected people have injected drugs, many of whom have spent time in some form of correctional facility during their lifetimes [4]. Several HCV mono-infected and HIV-HCV co-infected sub-populations, including people who inject drugs (PWID), have failed to benefit from treatment expansion efforts despite being disproportionately affected [5,6]. Given the heterogeneity of those infected by HCV, experts are encouraging the "micro-elimination" of HCV, whereby specific and effective treatment interventions are directed towards individual sub-populations such as PWID or people in prison [7].
Due to a high lifetime prevalence of injection drug use (IDU), incarcerated populations are disproportionately burdened by chronic HCV [8]. Approximately one-third of the 11 million people imprisoned worldwide at any given time have been previously exposed to HCV, with differences in country-level estimates related primarily to geography and prevalence of IDU [9,10]. In the United States, the correctional population represents one-third of all national HCV cases [11], underscoring the importance of systematic HCV screening, improved linkage to HCV care following release and expanded treatment efforts within and outside prison settings.
Currently, limited data exist on linkage to care and treatment initiation in the direct-acting antiviral (DAA) era for HCV-mono-or HIV-HCV co-infected individuals from correctional facilities outside the United States [12,13]. Given the important contribution of incarceration on perpetuating the HCV epidemic [14] and the availability of curative DAA therapy, prioritizing the treatment of people in and recently released from prison with chronic HCV will be essential to achieve the 2030 HCV elimination goals set by the World Health Organization [15]. The aim of this study was to examine incarceration patterns among HIV-HCV co-infected persons in Canada and to determine whether a history of incarceration impacts HCV treatment uptake in the DAA era.
| Study population
We used data from the Canadian Co-infection Cohort Study (CCC; CTN222), a prospective multicentre study recruiting patients 16 years of age and older with documented HIV infection (HIV seropositive by enzyme-linked immunosorbent assay (ELISA) with western blot confirmation) and with chronic HCV infection or evidence of HCV exposure (e.g. HCV seropositive by ELISA with recombinant immunoblot assay II or enzyme immunoassay confirmation, or if serologically false-negative, HCV RNA positive). From April 2003 to 30 June 2017, 1788 patients were enrolled from 18 sites across six Canadian provinces. Participating centres included large urban tertiary care hospitals, community-based HIV clinics and street outreach programmes in urban and semi-urban settings in an attempt to capture a representative population of co-infected patients in care. All eligible patients were approached to participate to avoid selection bias. Cohort design and protocol have been reported in detail elsewhere [16].
| Data collection
After written informed consent was obtained, patients underwent an initial evaluation followed by study visits approximately every six months. At each visit, sociodemographic and behavioural information (including substance use, health services utilization and incarceration) were self-reported in questionnaires, medical treatments and diagnoses were collected by research personnel, and laboratory analyses were performed. The study was approved by the community advisory committee of the Canadian Institutes of Health Research (CIHR)-Canadian HIV Trials Network and by all institutional ethics boards of participating centres.
| Incarceration patterns
In order to assess incarceration patterns, we selected participants who had available information on history of incarceration at enrolment and at least two cohort visits. We compared baseline sociodemographic, behavioural and clinical characteristics between patients with and without a history of incarceration at enrolment. Comparisons were made using a Fisher's exact test for binary variables, a chi-squared test for categorical variables and a Wilcoxon rank-sum test for continuous variables.
Time to incarceration during study follow-up was assessed separately among patients with and without a history of incarceration at enrolment using the Kaplan-Meier method and a comparison was made using a log-rank test. Eligible patients were followed from enrolment until they first became incarcerated during follow-up. Patients who were never incarcerated during the study period were censored at death, loss to follow-up (no visits for more than 1.5 years), withdrawal of consent or at administrative censoring on 30 June 2017, whichever occurred first. Rates of incarceration and median time to incarceration were reported with their 95% confidence intervals (CI).
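For readers who want to see the survival-analysis steps described here in executable form, the sketch below reproduces the same logic (Kaplan-Meier curves by baseline incarceration history and a log-rank comparison) in Python with the lifelines package. The study itself used patient-level cohort data and R-based workflows; the toy data frame and column names below are placeholders.

```python
# Illustrative Kaplan-Meier curves and log-rank test for time to (re-)incarceration.
# Toy data and column names only; the actual analysis used the cohort's patient data.
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

df = pd.DataFrame({
    "years_to_event": [1.2, 3.5, 7.5, 2.0, 6.1, 8.9, 4.4, 5.0],
    "incarcerated":   [1,   0,   1,   1,   0,   1,   0,   0],  # 1 = incarcerated during follow-up
    "history":        [1,   1,   1,   1,   0,   0,   0,   0],  # 1 = incarceration history at enrolment
})

kmf = KaplanMeierFitter()
for label, grp in df.groupby("history"):
    kmf.fit(grp["years_to_event"], event_observed=grp["incarcerated"], label=f"history={label}")
    print(f"history={label}: median time to incarceration = {kmf.median_survival_time_}")

with_hist, without_hist = df[df.history == 1], df[df.history == 0]
result = logrank_test(with_hist["years_to_event"], without_hist["years_to_event"],
                      event_observed_A=with_hist["incarcerated"],
                      event_observed_B=without_hist["incarcerated"])
print(f"log-rank p-value: {result.p_value:.3f}")
```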
| Treatment uptake
Second-generation DAAs (starting with simeprevir) were first approved by Health Canada on 21 November 2013. Access to second-generation DAAs according to incarceration history was assessed among a subgroup of patients who: (1) had available information on history of incarceration at enrolment, (2) were HCV RNA positive on or after 21 November 2013 and (3) did not die, withdraw consent, become lost to follow-up, successfully cure their HCV infection through treatment or initiate a second-generation DAA prior to 21 November 2013.
Patients were followed-up from 21 November 2013 or upon enrolment into the cohort, whichever occurred later. Follow-up ended if an eligible treatment was initiated or if patients were censored. Censoring was applied at the earliest date that any of the following occurred: (1) spontaneous clearance of HCV, (2) death, (3) withdrawal of consent, (4) loss to follow-up, (5) initiation of a treatment that did not contain a second-generation DAA or (6) administrative censoring on 30 June 2017.
Time to DAA treatment uptake was modelled with a multivariate Cox proportional hazards model using robust standard errors. The exposure of interest was time-updated incarceration history. The following adjustment covariates, known to be associated with treatment uptake in the DAA era, were chosen a priori and measured at the cohort visit closest to the start of study follow-up: age, sex, Indigenous ethnicity, monthly income (≤1500 Canadian dollars (CAD)), a history of IDU, current IDU (within the past six months), hazardous drinking in the past six months (as defined by the AUDIT-C [17]), history of psychiatric diagnosis (depression, bipolar disorder, schizophrenia, personality disorder) or hospitalization, HCV genotype 3, advanced liver fibrosis (based on an aspartate-to-platelet ratio index (APRI) greater than 1.5 at any time prior to the start of study follow-up), undetectable HIV viral load (≤50 copies/mL) and Canadian province. When adjusting for province, British Columbia was used as the reference with individual indicators for Saskatchewan and Quebec, and a combined indicator for Ontario, Alberta and Nova Scotia. This reflects the regional differences in criteria for access to, and reimbursement of, DAA therapies for co-infected patients during the study period. Specifically, reimbursement criteria based on the level of liver fibrosis varied across provinces with Quebec having the most liberal policies [18]. All analyses were conducted using R statistical software [19].
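To make the counting-process setup behind this model concrete, the sketch below fits a Cox model with a time-updated incarceration covariate using the lifelines package in Python. The published analysis was carried out in R with robust standard errors and the full covariate set listed above; the toy rows, the single covariate and the column names here are placeholders, and clustered robust errors are not reproduced.

```python
# Sketch of a Cox model with a time-updated incarceration covariate
# (start/stop counting-process format). Placeholder data; the published
# analysis used R, many more covariates and robust (clustered) errors.
import pandas as pd
from lifelines import CoxTimeVaryingFitter

long_df = pd.DataFrame({
    "id":           [1, 1, 2, 3, 3, 4, 5, 6, 7, 7, 8],
    "start":        [0.0, 1.0, 0.0, 0.0, 2.0, 0.0, 0.0, 0.0, 0.0, 0.8, 0.0],
    "stop":         [1.0, 2.5, 3.0, 2.0, 3.5, 1.5, 2.2, 3.5, 0.8, 2.8, 3.0],
    "incarcerated": [0,   1,   0,   1,   1,   0,   1,   0,   1,   0,   1],
    "treated":      [0,   1,   1,   0,   0,   1,   1,   0,   0,   1,   0],  # DAA initiation event
})

ctv = CoxTimeVaryingFitter()
ctv.fit(long_df, id_col="id", start_col="start", stop_col="stop", event_col="treated")
ctv.print_summary()  # exponentiated coefficients are hazard ratios, analogous to the aHRs in Table 2
```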
| Participant sociodemographic and clinical characteristics
A total of 1433 HIV-HCV co-infected patients were included following the exclusion of those with missing baseline incarceration information (n = 107) and those with fewer than two cohort visits (e.g. recently enrolled; n = 248). Of those remaining, 67% (955/1433) had a history of previous incarceration. Patient sociodemographic, behavioural and clinical characteristics stratified by incarceration history are presented in Table 1. Compared to those who were never incarcerated, previously incarcerated patients were younger, more likely to report Indigenous ethnicity, earn less than $1500 CAD per month, be homeless or live in a shelter, and report current or a history of IDU and current use of other drugs. With respect to HIV infection, those with a history of incarceration were less likely to be on cART and be virally suppressed, and were more likely to have a lower median CD4+ T-cell count. Regarding HCV infection, patients with a history of incarceration were more likely to have genotype 3 infection and have longer durations of infection despite similar proportions of advanced liver fibrosis. Those previously incarcerated also reported more frequent use of healthcare services. Furthermore, 23% reported IDU and 23% having had a tattoo done while in prison.
| Incarceration patterns
Among the 955 patients with a history of incarceration, 368 (39%) were re-incarcerated at least once during follow-up, with an incidence rate for first re-incarceration of 11.3 per 100 person-years (95% CI: 10.2 to 12.5). In contrast, among the 478 patients with no history of incarceration, 35 (7%) were incarcerated during follow-up, with an incidence rate for first incarceration of 1.6 per 100 person-years (95% CI: 1.1 to 2.2). Figure 1 shows the Kaplan-Meier survival curves for time to incarceration stratified by incarceration history. Patients with a history of incarceration were significantly more likely to be incarcerated during follow-up than those without a history of incarceration. The median time to re-incarceration among those previously incarcerated was 7.5 years (95% CI: 5.5 to 8.9).
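The incidence rates and confidence intervals reported in this paragraph can be reproduced, to rounding, with a standard exact (chi-square based) Poisson interval. In the snippet below the person-years are back-calculated from the published rates, which is an approximation of mine rather than a figure given in the paper.

```python
# Exact (chi-square based) Poisson confidence intervals for incidence rates per 100 person-years.
from scipy.stats import chi2

def poisson_rate_ci(events: int, person_years: float, alpha: float = 0.05):
    lower = chi2.ppf(alpha / 2, 2 * events) / (2 * person_years)
    upper = chi2.ppf(1 - alpha / 2, 2 * (events + 1)) / (2 * person_years)
    return lower * 100, upper * 100          # per 100 person-years

# Person-years approximated by back-calculation from the published rates.
for events, person_years, label in [(368, 368 / 0.113, "history of incarceration"),
                                    (35, 35 / 0.016, "no history of incarceration")]:
    lo, hi = poisson_rate_ci(events, person_years)
    rate = events / person_years * 100
    print(f"{label}: {rate:.1f} per 100 PY (95% CI {lo:.1f} to {hi:.1f})")
# Matches the reported 11.3 (10.2 to 12.5) and 1.6 (1.1 to 2.2) per 100 person-years.
```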
Treatment uptake
A total of 963 (54%) cohort participants met all eligibility criteria for the analysis of time to DAA uptake ( Figure 2). During follow-up, 339 patients started an eligible second-generation DAA treatment course (18 treatments per 100 person-years), of which 96% were interferon-free. The remaining patients were censored due to loss to follow-up (n = 175), death (n = 50), the initiation of a treatment that did not contain a second-generation DAA (n = 17), spontaneous clearance of HCV (n = 16), withdrawal of consent (n = 11) or administrative censoring (n = 355).
Overall, 48% (125/263) of participants with no history of incarceration were treated (27 treatments per 100 person-years) compared to 31% (214/700) of previously incarcerated participants (15 treatments per 100 person-years). Sustained virologic response (SVR) rates at 12 weeks were 95% and 92%, respectively. Table 2 presents the adjusted hazard ratio (aHR) estimates from the analysis of time to treatment uptake. Independent of other factors included in the multivariable model, time-updated incarceration was associated with a lower risk of treatment initiation (aHR: 0.7, 95% CI: 0.5 to 0.9). Other factors associated with a lower risk of treatment uptake included IDU in the last six months, a monthly income of less than $1500 CAD and residency in Saskatchewan (compared to residency in the province of British Columbia). Patients with advanced fibrosis (APRI > 1.5), undetectable HIV viral loads and residency in Quebec were more likely to be treated for HCV. There was no evidence of effect modification by sex (interaction term between sex and incarceration status: aHR 1.1, 95% CI: 0.6 to 2.0).
DISCUSSION
Our study offers the first description of incarceration patterns and the effects of incarceration on HCV treatment uptake in a large HIV-HCV co-infected cohort. Not only had the majority (67%) of our cohort previously been incarcerated, but we also observed a high re-incarceration incidence rate. Results from our study also provide evidence that previous incarceration is an important patient-level barrier to HCV treatment initiation in the DAA era among HIV-HCV co-infected persons in Canada, even after accounting for several patient-level characteristics. While the high re-incarceration rates probably affected treatment initiation, other unmeasured social determinants or behavioural attributes among those with a history of incarceration (e.g. mistrust of the health system, psychological distress, or food and housing insecurity) may also have contributed to the observed lower rates of treatment. This is despite the engagement of HIV-HCV co-infected populations in HIV care, which facilitates their identification for HCV treatment, and the absence of restrictions on DAA uptake for co-infected persons based on sociodemographic or behavioural risk factors in Canada [18]. In addition to increased interactions with the correctional system, we found that previously incarcerated HIV-HCV co-infected persons made more frequent urgent medical care visits. These frequent interactions with both correctional services and healthcare systems represent missed opportunities for linkage to HCV care.
While strategies aimed at increasing access to HCV treatment should be explored for people in prison with chronic HCV [20], several factors, including high turnover rates owing to short incarcerations, frequent prison transfers and the high cost of DAAs, require consideration before treatment is initiated [21]. We found equivalently high SVR rates among those with or without an incarceration history, suggesting that the decision to initiate treatment for individuals with a history of incarceration should not be based on a provider's perceived risk of treatment failure. Several countries including Canada, the United States and Australia have recently begun to prioritize treatment of inmates with sentences that allow for the completion of DAA therapy during incarceration [22]. This is a reasonable approach owing to lower SVR rates among inmates who are initiated on treatment but who are subsequently transferred or released [23]. Given the recent prioritization of HCV treatment for people in federal prisons in Canada [22], where sentences are greater than two years, the results of our study (that HIV-HCV co-infected persons with a history of incarceration experience decreased HCV treatment uptake) likely reflect both deficiencies in HCV treatment programmes in Canadian correctional facilities and a lack of linkage to HCV care at the time of release. Strengthening linkage to HCV care at the time of release is of paramount importance if micro-elimination of HCV is to occur among people in prison. Recent HCV cascade analyses demonstrate that linkage to care rates following release vary between 9% and 33% in the United States [12,13,24], implying that linkage is the rate-limiting step for treatment uptake for many people in prison with chronic HCV. Interestingly, Hochstatter et al. found that released inmates were more likely to link to care if they received any HCV care while incarcerated [12]. This echoes findings that prison-based multidisciplinary care is associated with improved engagement along the HCV care cascade and improved patient-reported outcomes [25,26]. These results have important implications for prison care strategies and suggest that if such strategies are provided by one or more members of an on-site multidisciplinary care team, linkage can be improved with minimal costs to the system [21]. Furthermore, a recent systematic review evaluating interventions to increase HCV engagement for people in prison found only one study that aimed to improve linkage following the release of inmates [27], highlighting the need for rigorous controlled trials of novel strategies for linkage to care in the DAA era. While linkage to care at the time of release may be particularly challenging for released inmates due to multiple competing priorities [28,29], several recent studies have demonstrated the feasibility of post-release HCV linkage to care programmes [13,24].

Our results highlight other missed opportunities for linkage to HCV care for persons with a history of incarceration. While we have already emphasized that any interaction with the correctional system should serve as an opportunity for linkage at the time of discharge, interactions with the overall healthcare system should serve a similar purpose. Our study found that HIV-HCV co-infected persons with a history of incarceration had twice the number of emergency department (ED) visits and hospitalizations compared to those never incarcerated.
Linkage to HCV care for ED patients with known chronic HCV can be challenging for various reasons including that EDs are rarely well integrated into the greater healthcare system. That said, ensuring linkage to care for this population has the potential to decrease incident and prevalent HCV infections [30]. A recent study evaluating the HCV cascade of care among those screened for HCV in two EDs in the United States found that 61% of those who had a follow-up appointment scheduled for HCV care in the ED were subsequently linked to care [31]. While these patients were not restricted to those with an incarceration history, this study suggests that linkage to HCV care is feasible following a brief interaction with the healthcare system. A similar screening and linkage programme implemented with baby boomers in a safety net hospital in the United States found that more than 80% of patients were linked to follow-up HCV care [32], reinforcing that brief or extended interactions with the healthcare system can serve as important opportunities for linkage to HCV care.
While we have specifically emphasized linkage to HCV care at the time of release, in the context of a population with multimorbidity and significant social vulnerabilities, strengthening linkages with primary care rather than disease-specific specialty care may be the ideal long-term solution [21]. For those previously incarcerated to benefit from any healthcare, addressing the social determinants of health becomes particularly valuable; alleviating food and housing insecurity, facilitating employment and other income opportunities, and ensuring access to harm reduction services at the time of release undoubtedly take precedence for many. While primary care may be suited to address some of these challenges, post-incarceration transition clinics have also emerged as models of care to address these specific barriers in a culturally appropriate manner [33,34]. By addressing these basic human needs together with HCV care, overall health and quality of life outcomes may improve.
Although a small proportion of correctional facilities have begun to expand prison-based linkage and treatment programmes, the majority have not yet succeeded in instituting systematic screening programmes despite long-standing WHO recommendations [35]. While our results are likely generalizable to many resource-constrained settings, HCV linkage and treatment programmes for inmates at the time of release are unlikely to be prioritized if systematic screening of high-risk groups is not yet in place. In order to first expand screening, resource-limited countries should evaluate the opportunities and challenges of integrated versus vertical care models for HCV diagnosis in services such as HIV clinics, prison health services, and needle-syringe and opioid substitution therapy programmes in order to prioritize those who are incarcerated or who may eventually become incarcerated [36]. Scaling up community-level HCV treatment and care, as prison-based HCV treatment may not be widely available for some time, will then require many intersecting initiatives such as negotiating price reductions, simplifying care, decentralizing care to non-specialists to overcome human resource constraints, encouraging patient and community engagement, and increasing financial and political commitment [36]. As commitments to eliminate HCV begin to roll out in many developed and developing countries, prioritizing an HCV care package for vulnerable groups such as people in prison will be an essential part of the response.
The Canadian Co-infection Cohort comprises a diverse patient population followed at various primary and tertiary care clinics in urban and semi-urban areas in Canada and is thus representative of the co-infected Canadian population [16]. However, our study has limitations. We were unable to stratify our data based on the type of Canadian correctional facility (federal or provincial/territorial prison). However, the Correctional Service of Canada announced that all federal inmates with chronic HCV would be eligible for HCV treatment in July 2017 [22], after our study period closed. To account for this policy change, information on type of
correctional facility began to be collected in the cohort in April 2018. Another limitation is that the exact dates of incarceration and release were not known. Consequently, when measuring the time to incarceration, we could only use a proxy date for the incarceration event; namely, the date of the cohort visit at which the patient reported being incarcerated in the previous six months. Furthermore, for the same reason, it was not possible to assess the rate of DAA treatment while patients were incarcerated. Finally, our results are not generalizable to HIV-HCV co-infected individuals who do not access HIV care; namely, those who are not diagnosed or linked to care, representing approximately 15% and 10% respectively of the HIV-HCV Canadian co-infected population [37].
CONCLUSIONS
In order to eliminate HCV by 2030, people in and recently released from prison must be part of the global elimination agenda. Our study identified previous incarceration as an important patient-level barrier to HCV treatment initiation in the DAA era among HIV-HCV co-infected persons in Canada. Until HCV care and treatment programmes become fully integrated in correctional facilities, emphasis should be placed on strengthening linkage to HCV care from incarceration or within the healthcare system itself.
Consideration of the Factors Influencing the Specific Rates of Solvolysis of p-Methoxyphenyl Chloroformate
A recent correlation analysis of the specific rates of solvolysis of p-methoxyphenyl chloroformate (1) in 31 solvents using the three-term Grunwald-Winstein equation led to a sensitivity (h) towards changes in the aromatic ring parameter (I) of 0.85 ± 0.15. This value, suggesting an appreciable contribution from the hI term, is in contrast to the h value of 0.35 ± 0.19 that was reported for the parent phenyl chloroformate (2). However, for 1, only two specific rate values were available for the important fluoroalcohol-containing solvents. Values are now reported for 13 additional solvents, 12 of which have appreciable fluoroalcohol content. With all 44 solvents considered, it is found that the solvolytic behavior indicated for 1 now parallels very closely that previously reported for 2.
Introduction
The Grunwald-Winstein equation (equation 1) was originally developed [1] in 1948 for the correlation of specific rates of solvolysis of initially neutral substrates reacting by an ionization (SN1 + E1) mechanism:

log (k/k_0) = mY + c    (1)

In equation 1, k and k_0 are the specific rates of solvolysis in a given solvent and in the standard solvent (80% ethanol), respectively, m represents the sensitivity to changes in the solvent ionizing power Y (initially set at unity for tert-butyl chloride solvolyses), and c is a constant (residual) term. It is now realized both that the scales are leaving-group dependent and that adamantyl derivatives provide better standard substrates, and a series of Y_X scales are available [2]. It was immediately realized that bimolecular (SN2 and/or E2) reactions cannot be expected to follow such a relationship because solvent nucleophilicity (N) will also be an important factor [1,3]. However, for a given type of binary solvent (such as a series of aqueous-ethanol mixtures) a linear plot based on equation 1 was frequently observed, owing to collinearity between the N and Y scales [4]. Such plots had m values considerably lower than unity, and these values were taken as evidence for a bimolecular reaction [1,3,4].
It was further realized [3] that, in principle, the correlation could be extended (equation 2) to include a term governed by the sensitivity ℓ to changes in solvent nucleophilicity (N):

log (k/k_0) = ℓN + mY + c    (2)

However, in practice, an N scale could not be developed because the appropriate m value for insertion into the equation (ℓ = 1 for the standard substrate) could not be obtained. Schleyer and Bentley [5] estimated the m value at 0.3 for the solvolyses of methyl p-toluenesulfonate and arrived at the N_OTs scale. At the present time, scales are usually based on the solvolyses of the S-methyldibenzothiophenium ion [6], in which the leaving group is a neutral molecule that is little influenced by solvent change, so that the mY term can be neglected. The N_T values obtained [6,7] indicated that the m value for methyl p-toluenesulfonate is best set at 0.55, and revised N'_OTs values are in good agreement with N_T values [6,7]. When aromatic rings are bonded, at the transition state, to the carbon which is developing positive charge, the charge will be partially distributed into the aromatic rings. This causes changes in the solvation of the rings in going from the substrate to the activated complex [8], which in turn leads to a perturbation of analyses in terms of equation 1 or 2. This can be accommodated by the use of similarity model scales, in which the standard substrate contains similarly situated aromatic rings [9,10] and new ionizing power scales are devised. Alternatively, a third variable term can be added to the linear free energy relationship (equation 3), governed by the sensitivity h to changes in the aromatic ring parameter (I).
log (k/k_0) = ℓN + mY + hI + c    (3)

The development and uses of extended forms of the Grunwald-Winstein equation were recently reviewed [11] in more detail than is presented in this manuscript.
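In practice, fitting equation 3 is a multiple linear regression of log (k/k_0) against N_T, Y_Cl and I. The R sketch below shows the form such a fit might take; the data frame 'gw' and its column names are hypothetical stand-ins for the tabulated solvent parameters rather than the authors' actual data set.

```r
# Sketch of a three-term Grunwald-Winstein correlation (equation 3).
# 'gw' is assumed to contain one row per solvent with columns:
#   logkk0  - log(k/k0) for the substrate
#   NT, YCl - solvent nucleophilicity and ionizing power
#   I_arom  - aromatic ring parameter
fit3 <- lm(logkk0 ~ NT + YCl + I_arom, data = gw)
summary(fit3)                     # l, m, h and c with their standard errors
sqrt(summary(fit3)$r.squared)     # multiple correlation coefficient, as tabulated
```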
In recent correlations [11], using the three forms of the Grunwald-Winstein equation (equations 1-3), evidence was found for a modest hI contribution in the solvolyses of N,N-diphenylcarbamoyl chloride even though the aromatic rings are not directly attached to the carbon at the reaction center. This gave support to the claim by Liu [12], based on both experimental and theoretical considerations, that in these solvolyses positive charge is transferred to the aromatic rings through contributions from non-canonical resonance structures. If such an effect can be operative in the solvolyses of aromatic carbamoyl chlorides (Ar2NCOCl), it could also be present in the solvolyses of aromatic chloroformate esters (ArOCOCl), such as p-methoxyphenyl chloroformate (1) or phenyl chloroformate (2). While the h value of 0.35 ± 0.19 was essentially negligible for 2, a much larger value of 0.85 ± 0.15 was calculated for 1 [11]. However, it was pointed out that the 31 solvents used in the analyses of the specific rates of solvolysis of 1 included only two with a fluoroalcohol component. Fluoroalcohols are extremely important, either as pure solvents or as components of binary mixtures, in studies leading to analyses in terms of the Grunwald-Winstein equations [13-15]. Accordingly, it was suggested [11] that a more detailed investigation of the solvolyses of 1 was desirable. In this contribution we have augmented the study of the specific rates of solvolysis of 1 by adding additional solvents, almost all of them with an appreciable fluoroalcohol component.
Results and Discussion
The solvolyses of 1 can be expressed as in Scheme 1. Values for the specific rates of solvolysis at 25.0 °C were previously available for 31 pure and binary solvents [11], and 13 additional values, presented in Table 1, have been determined. Twelve of the new determinations are in solvents with appreciable fluoroalcohol content.
Correlations with all 44 solvents are considerably improved, primarily because of an improved variety of solvents as regards the relationship between N_T and Y_Cl values, and only secondarily because of the increase in the number of data points. Of the binary mixtures with water, five involve an appreciable proportion of 2,2,2-trifluoroethanol (TFE) and four an appreciable proportion of 1,1,1,3,3,3-hexafluoro-2-propanol (HFIP). In addition, five binary compositions involve mixtures of TFE and ethanol. Table 1 also includes the additional N_T [7], Y_Cl [16,17], and I [18] values needed within the correlation analysis. The correlation analyses have been carried out in terms of equations 2 and 3. A major goal of the analyses is to examine the extent of the improvement (if any) in going from application of equation 2 to application of equation 3, that is, on adding the hI term. The results of the correlations are presented in Table 2. For comparison, the results reported earlier [11] for the solvolyses of 1 in 31 solvents and for the solvolyses of 2 in 49 solvents are both included in the table. Also included is the correlation of the specific rates of solvolysis of 2 restricted to exactly the same 44 solvents used in the correlation with 1 as the substrate.
The correlation of the specific rates of solvolysis of 1 gave a good correlation in terms of equation 2 (Figure 1), with virtually no improvement in the multiple correlation coefficient (0.981 to 0.982) on advancing to the application of equation 3; further, the F-test value fell appreciably (517 to 359). In particular, the h value of 0.29 ± 0.18 was much lower than the 0.85 ± 0.15 reported for 31 solvents, and it was associated with a large (0.114) probability that the hI term was statistically insignificant. With the application of equation 2, the multiple correlation coefficient improves considerably (0.964 to 0.981) on inclusion of the 13 data points from Table 1. The values in Table 2 illustrate the need for a good selection of solvents for a meaningful application of extended forms of the Grunwald-Winstein equation. The observed ℓ and m values are within the range previously observed for other reactions at acyl carbon which are believed to proceed by an addition-elimination (association-dissociation) mechanism (shown for 1 in Scheme 2 below), with the addition step rate-determining [11,19,20].

The correlation parameters obtained for 1 and for 2 over the same 44 solvents are very similar (Table 2). This observation suggests that a very good direct linear relationship should exist between their specific rates of solvolysis. It can be seen from Figure 2 that this is indeed the case: a plot of log (k/k_0) values for 1 against those for 2 gives an excellent linear plot with a correlation coefficient of 0.998, an F-test value of 9302, a slope of 0.991 ± 0.010, and an intercept of 0.075 ± 0.015.
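Whether the hI term is statistically justified can be framed as a comparison of nested regressions, which is essentially what the F-test values and significance probabilities quoted above summarize. A sketch using the same hypothetical 'gw' data frame as before:

```r
# Sketch: does adding the hI term improve on the two-term correlation?
fit2 <- lm(logkk0 ~ NT + YCl,          data = gw)   # equation 2
fit3 <- lm(logkk0 ~ NT + YCl + I_arom, data = gw)   # equation 3
anova(fit2, fit3)   # partial F-test; a large p-value (e.g. the 0.114 quoted above)
                    # means the hI term adds little beyond the lN + mY description
```

For a single added term, this partial F-test is equivalent to the t-test on the fitted h coefficient.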
Conclusions
The presently reported analyses strongly support the proposal of very similar mechanistic characteristics for the solvolyses of 1 and 2. They demonstrate that the previous indication [11] of a meaningful hI contribution associated with the extended Grunwald-Winstein treatment of 1 but not of 2 was, as suspected at the time, an artifact, resulting from an inadequate selection of solvents being available when the specific rates of solvolysis of 1 were treated in terms of equation 3. With the addition of data for the solvolysis in several fluoroalcohol-containing solvents, the linear free energy relationship behavior becomes essentially identical to that previously observed for 2.
Experimental Section
The p-methoxyphenyl chloroformate (Aldrich, 98%) was used as received. Solvents were purified and the kinetic runs carried out as described previously [6]. A substrate concentration of approximately 0.03 M was employed. The calculation of the specific rates of solvolysis (first-order rate coefficients) used the experimental infinity titers, at about ten half-lives, except for the runs in 97% HFIP, when portions were added to equal volumes of water and allowed to stand for 4 weeks prior to the usual titration of developed acid, and for the runs in 97% TFE, when the conventional Guggenheim treatment [21] was modified [22] so as to give the infinity titer, which was then used to calculate for each run a series of integrated rate coefficients. The specific rates and associated standard deviations, as presented in Table 1, are obtained by averaging all of the values from, at least, duplicate runs.
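For the standard runs, the specific rate (first-order rate coefficient) follows from the integrated first-order rate law using the infinity titer, while the Guggenheim approach avoids the infinity reading by pairing observations a fixed time apart. The R sketch below illustrates both calculations on invented, internally consistent placeholder data; it is not the authors' procedure and the numbers are for illustration only.

```r
# Minimal sketch (placeholder data): first-order specific rate coefficients from
# titers of developed acid. The values below were generated to follow
# A_t = A_inf * (1 - exp(-k*t)) with k ~ 5e-4 s^-1 and A_inf = 2.00.
t         <- c(0, 600, 1200, 1800, 2400, 3000)      # time / s
titer     <- c(0.00, 0.52, 0.90, 1.19, 1.40, 1.55)  # acid titer (arbitrary units)
titer_inf <- 2.00                                    # experimental infinity titer

# (a) Using the infinity titer: ln(A_inf - A_t) = ln(A_inf) - k*t
k_inf <- -unname(coef(lm(log(titer_inf - titer) ~ t))["t"])

# (b) Guggenheim method (no infinity titer needed): for a fixed lag D,
#     ln(A_{t+D} - A_t) = constant - k*t
D     <- 3                                           # lag of three sampling intervals
dA    <- tail(titer, -D) - head(titer, -D)
k_gug <- -unname(coef(lm(log(dA) ~ head(t, -D)))[2])

c(k_with_infinity = k_inf, k_guggenheim = k_gug)     # both ~5e-4 s^-1
```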
Combined delivery of Nogo-A antibody, neurotrophin-3 and the NMDA-NR2d subunit establishes a functional ‘detour’ in the hemisected spinal cord
To encourage re-establishment of functional innervation of ipsilateral lumbar motoneurons by descending fibers after an intervening lateral thoracic (T10) hemisection (Hx), we treated adult rats with the following agents: (i) anti-Nogo-A antibodies to neutralize the growth-inhibitor Nogo-A; (ii) neurotrophin-3 (NT-3) via engineered fibroblasts to promote neuron survival and plasticity; and (iii) the NMDA-receptor 2d (NR2d) subunit via an HSV-1 amplicon vector to elevate NMDA receptor function by reversing the Mg2+ block, thereby enhancing synaptic plasticity and promoting the effects of NT-3. Synaptic responses evoked by stimulation of the ventrolateral funiculus ipsilateral and rostral to the Hx were recorded intracellularly from ipsilateral lumbar motoneurons. In uninjured adult rats short-latency (1.7-ms) monosynaptic responses were observed. After Hx these monosynaptic responses were abolished. In the Nogo-Ab + NT-3 + NR2d group, long-latency (approximately 10 ms), probably polysynaptic, responses were recorded and these were not abolished by re-transection of the spinal cord through the Hx area. This suggests that these novel responses resulted from new connections established around the Hx. Anterograde anatomical tracing from the cervical grey matter ipsilateral to the Hx revealed increased numbers of axons re-crossing the midline below the lesion in the Nogo-Ab + NT-3 + NR2d group. The combined treatment resulted in slightly better motor function in the absence of adverse effects (e.g. pain). Together, these results suggest that the combination treatment with Nogo-Ab + NT-3 + NR2d can produce a functional ‘detour’ around the lesion in a laterally hemisected spinal cord. This novel combination treatment may help to improve function of the damaged spinal cord.
Introduction
Several obstacles are known to prevent recovery of spinal cord function after even less-than-complete transection of the adult mammalian spinal cord. These include neurite growth-inhibiting constituents of myelin, scar-associated inhibitory factors, and the lack of sufficient neurotrophic support (Snow et al., 1990;Schnell et al., 1994;Tuszynski & Gage, 1995;Fawcett, 2009). Whereas treatments targeting these processes individually have proven somewhat efficacious in facilitating axon regeneration and functional recovery after spinal cord injury, the effects are generally small. Therefore, it is conceivable that developing combination treatments, e.g. neutralizing the myelin-related inhibitory molecules, combined with growth-and plasticity-enhancing factors, could be an important step in improving repair of the damaged spinal cord. This is addressed in the present study in an attempt to promote the re-establishment of a functional detour around a hemisection (Hx).
The best-studied myelin-associated inhibitory molecule is Nogo-A. Acute inactivation by function-blocking antibodies or Nogo receptor antagonists, or blockade of the downstream signals RhoA or ROCK, enhances both regeneration of lesioned fiber tracts and compensatory sprouting of the spared corticospinal tract and other fibers (Schwab, 2004; Yiu & He, 2006). The effects of antibody-mediated neutralization of Nogo-A on ventrolateral funiculus (VLF) axons have not been studied to date.
The neurotrophic factor neurotrophin-3 (NT-3) has been shown to enhance regenerative sprouting of lesioned corticospinal, dorsal root and reticulospinal fibers in the injured spinal cord (Schnell et al., 1994; Tuszynski & Gage, 1995; Bregman et al., 2002; Alto et al., 2009). Earlier electrophysiological studies have revealed that synaptic connections from the VLF to individual motoneurons in uninjured young (postnatal days 2-10) rats could be strengthened by administration of NT-3 (Arvanian et al., 2003). However, this action of NT-3 required reversing the developmental loss of NMDA receptor activity due to Mg2+ block by enhancing expression of the NMDA-receptor 2d (NR2d) regulatory subunit in motoneurons using Herpes simplex virus (HSV-1) amplicon-mediated delivery of NR2d (Arvanian et al., 2004).
In the current study we used a unilateral spinal cord Hx (corresponding to a Brown-Sequard lesion in humans) in adult rats as a model for partial injuries because there is a clear lesion of one entire side of the cord with intact fibers remaining on the contralateral side. We examined whether intrathecal administration of a function-blocking anti-Nogo-A monoclonal antibody (Nogo-Ab) combined with long-term application of NT-3 (via fibroblasts) and transient facilitation of NMDA-receptor function (HSV-1-mediated viral delivery of the NMDA-NR2d subunit) could enhance re-establishment of functional synaptic connections from the transected lateral funiculi around the Hx lesion to ipsilateral lumbar motoneurons. Under these conditions novel responses were observed that differed from those in the uninjured cord in exhibiting markedly longer latency and higher electrical threshold. These positive electrophysiological findings were supplemented by anatomical studies showing long propriospinal axons crossing to the opposite side of the cord after the combination treatment. These changes were accompanied by mild improvement in recovery of motor function.
Portions of these results have been published in abstract form (Arvanian et al., 2006a;Schnell et al., 2007).
Animals and experimental design
These studies were performed in accordance with protocols approved by the Institutional Animal Care and Use Committees at SUNY/SB, University of Zurich, and Northport VAMC, USA. The design of experiments is presented in Fig. 1. A total of 126 adult female Sprague-Dawley rats (Charles River Laboratories, Wilmington, MA, USA; approximately 200 g) were housed in groups of four to six animals in standardized cages on a 12-h light-dark cycle with food and water ad libitum. Because of the large number of experimental groups (nine groups for the electrophysiological study, six groups for the behavioral study and five groups for the fiber tracing studies; see Results for details), the results were obtained in two separate experimental studies performed by the same investigators at two different times using the same suppliers for rats and the same treatment agents. Generally rats were pre-trained for 4 weeks to obtain baseline values in behavioral tests and then randomly divided into experimental groups according to the treatment. The rats were coded with random numbers and rats from the different groups were mixed in the cages. The experimenters were blind with regard to treatment throughout all phases of the experiment. Subsequent to surgery and treatment and in most cases behavioral evaluation, rats were used for electrophysiological recording or anatomical evaluation. However, not all treatment groups could be evaluated using all analyses (see Results). A general time line for these experiments is displayed in Fig. 1A.
Exclusion of animals
Seven animals were excluded from the study because post hoc evaluation of the lesion (see Fig. 1B) revealed that it was too small or too large: in three animals we detected a portion of spared ipsilateral dorsal white matter, while in four animals the over-hemisection extended beyond the midline for > 10% of the spared area of the hemicord. Other rats were eliminated either for general health problems, especially autophagia, or because they expired during the in vivo electrophysiological recordings (n = 16).
Surgical procedures and delivery of agents in combination treatment
In this study we used a lateral hemisection spinal cord injury model. This model allows electrophysiological evaluation of the possibility of establishing a functional detour around the lesion. Moreover, unilateral injections of the anterograde tracer permit visualization of midline-crossing fibers rostral to the lesion and recrossing fibers caudal to the lesion (see below). Finally, transmission deficits in the chronically hemisected spinal cord coincide with clear behavioral impairments in challenging motor tasks, including the irregular ladder and narrowing beam, although rats exhibit a robust recovery of their ability to walk in the open field (Arvanian et al., 2009).
After pre-training on the behavioral tasks for 4 weeks, rats were deeply anesthetized and the lateral Hx was carried out at T10 as previously described (Arvanian et al., 2009;Hunanyan et al., 2010). Briefly, a dorsal laminectomy was performed to expose segment T10 of the spinal cord. A 1-mm slit was made in the dura at the midline at T10. A complete Hx of the left hemicord at T10 was carried out with the tip of an iridectomy scissor blade, as follows: first, a 32-gauge needle was inserted through the midline from dorsal to ventral; then one tip of the scissors was pushed along the needle through the entire thickness of the spinal cord and the left dorsal and ventral columns were cut; finally one tip of the scissors was guided along the lateral surface of the spinal cord (down to the midline) and any uncut tissue in the left lateral and ventral columns was cut.
A fine intrathecal catheter (32-gauge) was inserted from lumbar level L2/L3 and pushed up to T10 to deliver the Nogo-Ab from an osmotic minipump (Alzet 2ML2; 5 µL/h, 3.1 µg/µL) for 2 weeks. The tubing connecting the catheter with the minipump was sutured to the back muscles for stabilization. Antibody treatment was started immediately after the lesion by rinsing the wound with approximately 1 µL of the corresponding antibody. We used Nogo-Ab 11C7 (3.1 mg/mL) and monoclonal mouse IgG directed against wheat auxin as control antibody. Multiple studies have revealed the excellent distribution and penetration of anti-Nogo antibodies infused intrathecally throughout the spinal cord of adult rats and monkeys (Weinmann et al., 2006). Function-blocking Nogo-Abs are also currently being applied intrathecally to spinal cord-injured patients in an on-going clinical trial (ATI-355 trial; Novartis, Basel, Switzerland).
Rat fibroblasts genetically modified to produce NT-3 (0.4 × 10^6 cells/µL) or β-galactosidase (control) were suspended in 0.6% glucose-PBS and a cell volume of 2 µL, inserted into collagen plugs (Kawaja & Gage, 1992; McTigue et al., 1998; Arvanian et al., 2003) and placed on top of the lesion. These earlier studies demonstrate that this procedure results in biological effects specific to the released neurotrophin and elevation of neurotrophin levels in the spinal cord days to weeks later.
HSV-1 amplicons encoding NR2d or control β-galactosidase were administered in two injections of 1 µL each (approximately 10^4 viral particles/µL) into the left and right ventral horn at T11, caudal to the injury region. We used a glass capillary with a tip of approximately 60 µm (calibrated for a volume of 1 µL) inserted into each side of the cord dorsum 1 mm lateral to the midline. HSV-1 amplicons have a transgene capacity sufficient to carry the NR2d cDNA and the co-expressed green fluorescent protein (GFP) reporter gene (Arvanian et al., 2004). Previous electrophysiological studies have revealed that delivery of HSV-1 amplicon-based vectors themselves does not alter synaptic function in the hippocampus (Dumas et al., 1999) or spinal cord (Arvanian et al., 2004, 2006b), thus supporting the use of the HSV-1 amplicon system as a safe method for delivering selected genes to the central nervous system. To confirm the ability of HSV-1 to infect cells at a distance from the infection site, we measured GFP expression in identified motoneurons at different segmental levels. Because HSV-1 gene expression is highest 24-48 h after administration and decays to undetectable levels by 2 weeks (Bowers et al., 2000), we could not use the same rats that were studied electrophysiologically or anatomically 7-12 weeks after HSV-1 administration. Therefore we used a separate control group of three hemisected rats treated in an identical manner (Fig. 1C). The degree of motoneuron infection was determined as the percentage of the total number of peripherin (green)-labeled cells that were also labeled with GFP (red), i.e. that were yellow. The immunolabeling procedure and analyses have previously been described (Arvanian et al., 2004). We found a substantial level of infection of identified motoneurons in the vicinity of the hemisection (79% in both the T5-T7 segments, rostral to the Hx, and the T11-T12 segments, caudal to the Hx). Even as far caudal as the L4-L6 segments, infectivity of motoneurons was 67% 7 days after injection of HSV-NR2d-GFP into the left and right ventral horns at T11 (Fig. 1C). Many other cells were also infected (GFP-labeled) at these locations but their identity as neurons or glia was not verified. (The legend to Fig. 1 notes that behavioral testing began before osmotic minipump removal; panel B shows representative camera lucida drawings of the maximal lesion area for each treatment group used for behavioral testing, reconstructed from at least 20 spinal cord cross-sections per animal, measured with Image J and expressed as a percentage of the area of the intact T10 segment, with mean ± SEM lesion size given for each group and arrows pointing to the midline; panel C shows GFP (red; HSV-NR2d-GFP marker) and peripherin (green; motoneuron marker) labeling of motoneurons at T12 and L5 7 days after intraspinal injection of GFP-expressing HSV-1 amplicons at the time of a T10 lateral hemisection, with motoneurons infected bilaterally at both levels; scale bars, 50 µm.)
These findings confirm the long-distance HSV-1 propagation within the CNS and transfer to other neurons (Zemanick et al., 1991;Curanovic & Enquist, 2009), in particular motoneurons upon which both ascending and descending cells with axons in VLF have been shown to terminate, using other tracing techniques as well as electrophysiologically (Petruska et al., 2007).
In order to reduce or prevent excitotoxicity that could be mediated through activation of NMDA receptors, we delivered subanesthetic doses of ketamine in all experiments (3 mg/kg, i.m., twice per day) during the initial 2 days post-injury, when the transient elevation of glutamate concentration following spinal cord injury occurs (Xu et al., 2004). The rationale for using ketamine, an NMDA receptor blocker known to be neuroprotective (Albers et al., 1989), was to minimize glutamate-induced excitotoxicity during the first 2 days after the initial lesion. In this study we did not examine the effects of ketamine alone. However, the comparisons of the effects of Nogo-Ab, HSV-NR2d and NT-3 in the various combinations were performed using the same surgical procedures and under the same recording conditions, and all animals received the same post-surgery ketamine injections.
Behavior
The following tests were carried out. Motor tests: open-field locomotion, ladder rung walk, narrowing beam, swim tests. Sensory tests (withdrawal reflex): plantar heater, von Frey hairs. The performance of each animal was normalized to its own pre-injury baseline.
Open-field locomotion
This was evaluated by using the 21-point Basso, Beattie, Bresnahan (BBB) locomotor scale (Basso et al., 1995). The rats were placed in an open field (diameter 150 cm) with a pasteboard-covered floor. In each testing session the animals were monitored individually for 4 min.
Ladder rung walk
The animals were required to walk along a 1-m-long horizontal ladder elevated to 30 cm above the ground. A defined stretch of 60 cm was chosen for filming and analysis. To prevent habituation to a fixed bar distance, the bars in this sector were placed irregularly (1-4 cm spacing). The animals performed the ladder rung walk twice in the same direction and once in the opposite direction. The number of errors (any kind of foot slip or total miss) was divided by the total number of steps in each crossing, yielding the percentage of missteps (Kunkel-Bagden et al., 1993).
Narrowing beam
This paradigm assesses the ability of the rats to balance along a tapered beam 20 cm above the ground. The beam is flanked by two side boards and graded into 24 stretches of the same length but different widths, starting with 5 cm and ending with 1.5 cm width, and can be walked along easily by an intact animal. The maximum possible score in this test is 24. Animals had to walk along the beam three times.
Swim test
The setup for the swim test consisted of a rectangular Plexiglas basin (150 × 40 × 13 cm) filled with water at 23°C. The water level was high enough to prevent the rats from touching the bottom of the basin with the tail. The animals' task was to swim straight to the 60-cm-distant board which they could climb to reach the home cage. A total of five runs per rat was monitored using a mirror at 45° at the bottom of the pool to film the rats from the side and the bottom simultaneously. Velocity, forelimb stroke rate and inter-hindlimb coordination were analyzed.
Withdrawal reflex -thermal stimulation
The thermal nociceptive threshold for both hind paws was evaluated by performing a standardized plantar heater test (Hargreaves et al., 1988) using a commercially available apparatus (Ugo Basile, Comerio, Italy). Rats were placed in a Plexiglas box (17 × 23 cm) and were first allowed to adjust to the new environment. When exploratory behavior ceased, an infrared source producing a calibrated heating beam (diameter 1 mm) was placed under the hind paw and triggered together with a timer. After one initial trial, the time to the hindlimb withdrawal reflex was averaged from four successive measurements. A minimum interval of 30 s was maintained between successive trials.
Withdrawal reflex -mechanical stimulation
Von Frey hairs (Semmes-Weinstein monofilaments; Stoelting Co., Wood Dale, IL, USA) with target forces ranging from 0.008 to 300 g were used. Rats were placed in a Plexiglas box (17 × 23 cm) with a fine grid bottom and were first allowed to adjust to the new environment. The monofilament was pressed against the plantar surface of the foot at a 90° angle until it bowed, and held in place for 1-2 s. This stimulation was repeated up to three times in the same location. The test was performed using filaments of increasing caliber until the first withdrawal reflex was noted.
Tracing
We assessed the crossing of propriospinal fibers that would project through the VLF as this was the tract that was activated electrophysiologically. Biotin dextran amine (BDA; 10%, MW 10 000) in a total volume of 1.0 µL was injected unilaterally into four sites in the left ventral horn at C4-C7 over a period of 10 min for anterograde tracing of midline-recrossing fibers. Ten days later the rats were perfused and spinal cords were removed and prepared for morphological evaluation. Alternating coronal sections were processed with Cresyl violet or were stained for BDA using a nickel-enhanced diaminobenzidine protocol. The number of fibers originating from the gray matter at C4/C7 and traced with BDA was analyzed quantitatively using a light microscope with bright-field illumination. We assessed the numbers of midline-crossing fibers above and below the lesion in every fourth (30-µm-thick) cross-section from the entire spinal cord (i.e. in approximately 500 sections per cord). For normalization, all midline-crossing fibers were counted from T11 to S5 (below the lesion) and standardized to the number of crossing fibers at T8 (above the lesion).
Electrophysiology
Experimenters were blinded as to the treatment of each rat. Rats were deeply anesthetized using i.p. injection of ketamine (80 mg/kg, 0.5 mL) and xylazine (10 mg/kg, 0.5 mL). Heart rate and expired CO2 were monitored continuously. Dorsal laminectomy of the spinal cord was performed at T6-T8 for placement of the stimulation electrode and at L1-L6 for placement of the recording electrodes. The L1-L6 ventral spinal segments were held tightly between custom-made bars, and the dorsal surface of the cord was embedded in a 3-mm-thick agar layer to minimize movement of the cord during recordings. We recorded responses from L5 motoneurons below the lesion (T10) on the same side as the Hx. These responses were evoked by stimulation of ventrolateral white matter tracts at T6 on the same side of the cord (details in Arvanian et al., 2009). Motoneurons were identified by the antidromic response to stimulation of the cut L5 ventral root. The resting membrane potential of motoneurons used for analyses ranged from -55 to -65 mV. Peak excitatory postsynaptic potential amplitude was measured from the pre-stimulus baseline to the peak. Latency was measured from the stimulus artifact to the response onset. After completion of electrophysiological recording, the rats were perfused and spinal cords removed and prepared for morphological evaluation of the injury level.
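The response measures described here reduce to two numbers per sweep: peak amplitude relative to the pre-stimulus baseline, and latency from the stimulus artifact to response onset. A minimal R sketch of how such values might be extracted from a digitized trace is given below; the sampling rate, stimulus index and onset criterion are illustrative assumptions rather than the recording parameters actually used.

```r
# Sketch: peak EPSP amplitude and onset latency from a single digitized sweep.
measure_epsp <- function(sweep, fs = 20000, stim_idx = 1000, onset_sd = 3) {
  # sweep:    membrane potential samples (mV)
  # fs:       sampling rate in Hz (assumed)
  # stim_idx: sample index of the stimulus artifact
  baseline  <- sweep[1:(stim_idx - 1)]
  base_mean <- mean(baseline)
  base_sd   <- sd(baseline)

  post      <- sweep[(stim_idx + 1):length(sweep)]
  peak_amp  <- max(post) - base_mean                 # mV, baseline to peak

  # Onset: first post-stimulus sample exceeding baseline by onset_sd SDs
  onset_rel  <- which(post > base_mean + onset_sd * base_sd)[1]
  latency_ms <- onset_rel / fs * 1000                # ms from artifact to onset

  c(amplitude_mV = peak_amp, latency_ms = latency_ms)
}
# Per-cell values would then be averaged over ~50 consecutive sweeps.
```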
Statistics
For the behavior experiments, two-way repeated-measures ANOVA and pairwise multiple comparison procedures (Holm-Sidak method) were used to determine the statistical significance of the results (P < 0.05). Data from the tracing experiments were subjected to one-way ANOVA followed by Bonferroni's post hoc pairwise comparisons (*P < 0.05).
For the electrophysiological studies, the mean maximum response from each motoneuron (50 consecutive responses per cell) was averaged over all motoneurons recorded in each rat, and these averages were compared between treatment groups using one-way ANOVA or one-way ANOVA on ranks (means are expressed ± SEM; n = number of rats). If significant differences were observed between groups, a Student-Newman-Keuls test or Dunn's method was used for pairwise comparisons, as appropriate.
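As an illustration of this analysis pipeline, the R sketch below averages per-cell responses within each rat and then compares treatment groups; the data frame 'cells' and its columns are hypothetical, and base R's TukeyHSD is used for the pairwise step in place of the Student-Newman-Keuls test named by the authors.

```r
# Sketch: one-way ANOVA on per-rat mean maximum responses, then pairwise tests.
# 'cells' is assumed to have one row per motoneuron: rat, group, peak_mV.
rat_means <- aggregate(peak_mV ~ rat + group, data = cells, FUN = mean)

fit <- aov(peak_mV ~ group, data = rat_means)
summary(fit)                 # one-way ANOVA across treatment groups
TukeyHSD(fit)                # pairwise comparisons (substitute for SNK)

# Non-parametric alternative (ANOVA on ranks) when normality is doubtful:
kruskal.test(peak_mV ~ group, data = rat_means)
```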
Electrophysiology
The goal was to determine whether the combination treatment induced the appearance of new functional connections spanning the hemisected segment. We recorded intracellularly from motoneurons below the lesion ipsilateral to the Hx. Responses were evoked by stimulation of the ipsilateral VLF white matter above the lesion. This approach improves detection of very weak functional connections across the injury region and enables investigation of the impact of the various treatments on these connections. For electrophysiology experiments we used nine groups: one non-injured group that received all control treatments, and eight groups that received a Hx lesion and no treatment, or treatment with one, with two, or with all three components of the combination treatment; appropriate controls were administered in cases where only one or two active components were delivered. The results are from experiments conducted 7-12 weeks after the surgery with different treatment groups randomly assigned to these times in order to minimize the variability of post-operation recording time among the groups (Fig. 1).
Hx disrupted monosynaptic connections to motoneurons and additive treatments established novel polysynaptic connections
In uninjured control rats that received a laminectomy and treatment with controls for all three agents in the combination treatment (Ringer-filled catheter, control fibroblasts, and control HSV-1 virus), the response in L5 motoneurons evoked from the ipsilateral T6 VLF exhibited the following properties: large peak amplitude (6.2 ± 0.8 mV), short latency (1.7 ± 0.1 ms), brief rise time and minimal fluctuation in both amplitude and latency (Fig. 2A; n = 56 cells from seven rats). These responses reached maximum amplitude at relatively low stimulus current intensity (67.8 ± 11.5 µA, 50 µs), were similar to those recorded in L5 motoneurons from the ipsilateral VLF in untreated intact adult rats (Arvanian et al., 2009), and were probably monosynaptic.
In Hx-lesioned rats that received either no treatment or control treatment, the mean response was barely distinguishable from baseline [no treatment: mean 0.2 ± 0.3 mV, n = 7, not shown; control treatment (Fig. 2B): mean 0.1 ± 0.2 mV, n = 5], even with VLF stimuli as intense as 600 µA at 50 µs width. When we repositioned the stimulation electrode caudal to the lesion, a typical monosynaptic response was recorded from the same motoneuron at low stimulus intensity (Fig. 2B inset). These results indicate that motoneurons below the lesion remained viable and capable of receiving inputs from surviving propriospinal fibers in the VLF below the lesion, and that the lack of transmission from above the lesion was due to a disrupted connection.

The striking finding was that the additive treatment (Nogo-Ab + NT-3 + NR2d) induced the appearance of large responses (4.7 ± 1.2 mV, n = 10 rats; Fig. 2I) in all (100%) injured rats. However, in contrast to the short-latency monosynaptic responses in uninjured rats, these responses exhibited a long latency (9.8 ± 1.9 ms), showed greater fluctuation in both amplitude and latency (Fig. 2, A vs. I), and required a markedly higher stimulus intensity (415 ± 53 µA, 50 µs) to evoke a maximum response. These results suggest that the functional connections established in the Hx-lesioned spinal cords that received the combination treatment with Nogo-Ab + NT-3 + NR2d probably involved conduction in smaller axons than those responsible for the responses in intact preparations, required more spatial summation on interneurons and were probably multisynaptic. In rats treated with each agent alone or in pairs [with corresponding controls for the missing agent(s)], the multisynaptic responses evoked from segments above the Hx were either absent or much smaller than in rats with the full combination treatment, even at the high stimulus currents (600 µA, 50 µs). Treatment with Nogo-Ab alone (0.8 ± 0.5 mV; Fig. 3F), or in combination with either NT-3 (1.1 ± 0.7 mV; Fig. 3H) or NR2d (1.0 ± 0.7 mV; Fig. 3G), resulted in weaker multisynaptic connections that could be recorded in over half the rats (52-75%). Treatment with NT-3 + NR2d induced the appearance of larger responses (peak amplitude 1.9 ± 0.7 mV; Fig. 2E) but did so in only a few rats (three of the 10 studied). In rats that received NR2d alone (0.2 ± 0.1 mV; Fig. 2C), responses were noted in only a subset of rats. Reconstruction of the Hx in each case revealed no relation between the size of the lesion and the amplitude of the responses (but see Discussion). (Legends to the tracing and behavioral figures note that blue fibers cross the midline once whereas red fibers cross it twice (MRCF; scale bar in A, 100 µm), that asterisks mark treatments with more recrossing fibers than IgG control-treated preparations, and that in the behavioral plots (swim, narrow beam and ladder rung, each normalized to the pre-injury baseline) error bars are displayed for the NR2d-alone and triple-combination groups, with asterisks marking times at which the full treatment group differed significantly (P < 0.05) from the partially treated groups.)
Novel polysynaptic responses were the result of a 'functional detour' around the Hx lesion
It was important to determine whether novel polysynaptic connections established in the triple combination group travelled through the lesion area or around the Hx. Therefore, in three rats treated with the full combination treatment, the spinal cord was carefully retransected through the existing scar after recording polysynaptic responses from several motoneurons while maintaining penetration of a motoneuron. We found that these responses persisted after this procedure (Fig. 3). These results confirm that the novel responses recorded in Hx-lesioned rats receiving the full combination treatment were the result of the establishment of new connections around the hemisected cord rather than regeneration through the lesion area.
Anatomical evaluation
The primary goal was to determine a possible anatomical substrate for the novel polysynaptic responses around the Hx in the triple combination treatment, which gave the most robust change in the electrophysiological studies. We were particularly interested in determining whether the electrophysiological changes are related to an increase in the number of midline crossings of fibers. Anterograde BDA tracing from injections at C4/C7 ipsilateral to the Hx was carried out in order to assess the number of fibers that crossed the midline caudal to the Hx (Fig. 4A). This injection site allowed us to follow the projection of interneurons (i.e. long propriospinal neurons; Reed et al., 2008) to the side contralateral to the Hx and back to the lesioned side more caudally. Above the lesion, fibers crossing the midline to the contralesional side cannot be distinguished from fibers that re-cross to the ipsilesional side, but below the lesion only midline-recrossing fibers are labeled (see Fig. 4C).
Because each group comprised approximately 500 sections from each of three to eight cords, it was impractical to study all eight groups. Instead we compared the results of animals treated with all three agents (Nogo-Ab + NT-3 + NR2d; n = 8 rats after exclusion) to results obtained from an untreated hemisected group (IgG control mAb; n = 3 rats after exclusion), which displayed no electrophysiological evidence of a functional detour. We made similar determinations using rats treated with two agents (Nogo-Ab with either NT-3 or NR2d), which gave intermediate electrophysiological evidence of a detour, or with NR2d alone, which produced a minimal detour. In the unlesioned cord the number of crossing fibers between T11 and L5 was very high (approximately 5000; n = 3). In Hx cords the number of crossing fibers above the lesion was also high, and we used the number of fibers at T8 to normalize for tracer injection quality. Counts of these crossing fibers (Fig. 4B) revealed that the group of animals with the combined Nogo-Ab + NT-3 + NR2d treatment was unique in having appreciable numbers of recrossing fibers (362 ± 90.1 SEM, n = 8 rats). This was significantly higher (P < 0.05) than the number observed in the other treated Hx groups. Administration of control antibody alone yielded counts of recrossing fibers that were uniformly very low. However, in individual experiments in the three groups with intermediate treatments we observed some recrossing fibers in all preparations treated with anti-Nogo and NT-3 or NR2d, and this might account for the electrophysiological evidence of the detour (Fig. 4B). In preparations treated with NR2d only, we observed no recrossing fibers in three animals; this is consistent with the lack of detour observed electrophysiologically. One preparation had an anomalously high number of recrossing fibers.
In agreement with the electrophysiological findings above, where no monosynaptic response in motoneurons was found in response to stimulating VLF rostral to the Hx and where a re-transection of the spinal cord did not alter the polysynaptic response, we did not find any evidence for axons that could have crossed through the lesion. We therefore consider it likely that the conduction path involves newly formed re-crossing fibers from long propriospinal axons, which connect via interneurons to the L5 motoneuron pool in the ventral horn of the ipsilesional side ( Fig. 4D; see Discussion).
Assessment of lesion size
After completion of physiological recordings or tracing experiments the lesion site was reconstructed from cross-sections and measured as a percentage of the area of the intact cord (Fig. 1B). Camera lucida drawings of six representative lesion sites from rats used for the behavioral studies are shown in Fig. 1B. It can be seen that the mean lesion size was virtually identical for all treatments. There were differences in the tissue that was spared from rat to rat but these were not systematic in the different treatments (see Discussion).
Behavioral evaluation
In order to minimize the role of extraneous factors, it was important to carry out behavioral testing on animals that arrived at the institutional animal facility from the same vendor at the same time and received treatment using the same lot of compounds. The need to do surgery and behavioral testing on a single group of animals placed a limit on the number of animals that could be studied. Thus we limited this experiment to six groups (vs. nine groups in the electrophysiology experiment). The groups were chosen to cover the range of results obtained in the initial electrophysiology experiments. All rats were pre-trained for 4 weeks, then received injury and treatment within 4 days and behavioral testing for the following 6 weeks using four motor and two sensory tests.
Two days after the operation, the rats were scored with the BBB test (Basso et al., 2002) to assess the extent of the lesion. The left leg did not score > 3 in any group while the right hindlimb was mostly able to support the body weight and perform plantar stepping, which is equivalent to a score of 8 or higher. In the course of a 3-week recovery the different groups improved their performance gradually by approximately 3-4 points and reached a plateau of 12 points (which is just below the score for coordinated forelimb-hindlimb stepping), with no significant difference between the groups (Fig. 5A). Although the quasi-quantitative protocol of BBB scoring is useful for evaluating the loss of function and recovery following injury, it has a major disadvantage in assessing the subtle improvements that result from treatments after thoracic Hx in rodents because of the robust spontaneous recovery of locomotor function that takes place after this type of injury (Courtine et al., 2008;Arvanian et al., 2009). Therefore all animals were also tested in more challenging tests such as the symmetry of swimming, narrowing beam and horizontal ladder paradigms ( Fig. 5B and C; see below).
In the swim test there was no difference in swimming speed between the groups (P > 0.05), as the Hx lesion allowed a relatively fast recovery of performance in this weight-supported test. We therefore focused on inter-hindlimb rhythm and measured the difference in beat duration (time for a complete stroke) between the right and the left hindlimb. In healthy, uninjured animals this difference is close to zero as the legs beat in a very regular pattern. After a lateral Hx lesion the animals exhibited 'limping', which is better observed and measured during swimming than in overground locomotion (BBB test). This interhindlimb coordination remained disturbed over the entire 42-day period following the lesion, but the difference in beat duration was smallest in the triple-combination treatment group (Fig. 5B). While the performance among the groups with one or two treatment components was similar (P > 0.05), the performance of the rats from the triple combination treatment group was significantly better than these other groups (P < 0.05).
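The inter-hindlimb coordination measure described above (the difference in beat duration between the right and left hindlimbs) could be computed as in the sketch below. It assumes stroke-onset times have been extracted for each limb from the swimming recordings, which is an assumption about the raw data rather than the authors' stated procedure.

```python
import numpy as np

def beat_duration_difference(right_onsets, left_onsets):
    """Absolute difference (s) between the mean beat durations of the two hindlimbs;
    each argument is a sequence of stroke-onset times (s) for one limb."""
    right_duration = np.diff(np.asarray(right_onsets, dtype=float)).mean()
    left_duration = np.diff(np.asarray(left_onsets, dtype=float)).mean()
    return abs(right_duration - left_duration)

# Placeholder onset times: a healthy animal gives a value near zero,
# a 'limping' animal a clearly positive value.
print(beat_duration_difference([0.00, 0.42, 0.85, 1.27], [0.05, 0.55, 1.10, 1.66]))
```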
In the narrow beam test, unlesioned animals normally reach the narrow end of the scaled bar without missteps. After the lesion, this capability was greatly reduced. Normalized values revealed that the triple combination treatment group performed significantly better than groups with one or two treatment components (P < 0.05). The performance among groups with one or two treatment components was similar (P > 0.05).
In the horizontal ladder rung test, the ability of the animals to place their hindpaws on the same rung as the forepaws was greatly reduced on the ipsilesional side, where hardly any successful hindpaw placements were performed (Fig. 5D). Significantly better performance was evident in the triple combination treatment group compared to the other groups tested at post-operative day 28; this difference was maintained at day 42, the last time point tested (P < 0.05). The performance among groups with one or two treatment components was similar (P > 0.05).
In order to evaluate the effects of the treatment on the sensitivity to nociceptive stimuli, we performed standardized von Frey filament and plantar heater tests. Over the 7, 21 and 42 days post-operation time points tested, treatment groups were indistinguishable from each other (Fig. 6).
Discussion
This study has revealed that a functional 'detour' can be established around a Hx, from the lesioned ventrolateral white matter above to ipsilateral motoneurons below, using a novel combination treatment with the following components: an antibody to the major inhibitory molecule Nogo-A, the neurotrophin NT-3, and the NR2d regulatory subunits to enable NT-3-induced plasticity. Combining neurotrophins with other agents to improve their effectiveness in restoring function is consistent with the results of recent studies adopting this approach (Lu et al., 2004;Nothias et al., 2005;Arvanian et al., 2006b;Chen et al., 2008;Massey et al., 2008). Furthermore, the VLF contains reticulospinal and long propriospinal fibers (Reed et al., 2008) known to participate in the recovery of locomotor function in rats following thoracic injuries (Basso et al., 2002;Schucht et al., 2002;Arvanian et al., 2009). Therefore, development of a treatment aimed at the restoration of VLF projections is an important strategy in promoting recovery of function after thoracic injuries.
Electrophysiological experiments revealed the appearance of novel long-latency responses in L5 motoneurons from the ipsilateral VLF rostral to Hx in rats receiving the combination treatment (Fig. 2). The fact that these responses were preserved after re-transection of the spinal cord through the pre-existing lesion strongly suggests that these novel responses were not due to axons regenerating through the lesion, but were the result of the establishment of novel functional connections around the Hx. Although the increase in branching from white-matter fibers observed in the tracing studies is suggestive of new connections being responsible for the detour, we cannot rule out a contribution from already existing subliminal connections or 'silent' synapses (Kerchner & Nicoll, 2008) that were strengthened and became visible after the treatment (Wall, 1988).
Previous double-labeling studies in intact adult rats using tracers injected into the lumbar cord and VLF have identified a population of cervical neurons that cross to the contralateral VLF and recross to terminate in the ipsilateral upper lumbar cord (Reed et al., 2008). However, the absence of electrophysiological responses observed in control hemisected preparations suggests very little functional connectivity to L5 motoneurons mediated by propriospinal fibers crossing above and below the Hx (Fig. 4). The propriospinal fibers studied here apparently did not sprout below the lesion spontaneously, as indicated by the virtual absence of midline-recrossing fibers below the Hx in the absence of treatments. However, after the full combination treatment the number of fibers recrossing caudal to Hx increased substantially (Fig. 4).
The mild behavioral effects of the combination treatment occurred on a background of robust spontaneous recovery of locomotor function observed after thoracic Hx in rodents (Courtine et al., 2008;Arvanian et al., 2009); this spontaneous recovery makes further improvements difficult to detect. More challenging tests such as narrowing beam, horizontal ladder and the swimming symmetry revealed minor yet significant improvement of motor function in rats with the full combination treatment (Fig. 5), with no change in nociceptive function (Fig. 6). More robust recovery may depend on strengthening the synaptic connectivity from the descending fiber systems on the hemisected side to neurons responsible for the detour.
Together these studies suggest that the combination treatment produced larger effects electrophysiologically, anatomically and behaviorally than components tested separately or in pairs. In the electrophysiology where all possible combinations were tested with corresponding controls, the ability of these agents to produce a detour is clear. In the case of the anatomy and behavior, the full combination treatment elicited more sprouting or functional recovery than any of the treatments tested. However, because not all combinations were studied anatomically and behaviorally, and because the behavioral recovery may not parallel the electrophysiological recovery, we remain cautious about making conclusions concerning the ability of these treatments to promote recovery of behavior. Another caveat is the possibility of small differences in tissue sparing among the different preparations (Fig. 1B). However, the uniform differences in plasticity at all levels between the treatment groups, and the uniformity of the lesion size, make it very unlikely that systematic differences in tissue sparing were a major factor determining the findings reported here.
How does the combination treatment produce the detour? The requirement for Nogo-A specific antibody and NT-3 and NR2d suggests that detour formation required sprouting or growth of axons as well as an increase in synaptic efficacy. Although our present results do not reveal the location of the novel connections, we believe that they are distributed throughout the cord but are probably most numerous or strongest close to the lesion site where the concentration of the exogenous agents is highest.
NMDA receptors on motoneurons become functional in the prenatal period (Ziskind-Conhaim, 1990;Kalb & Hockfield, 1992), but they suffer a decline in function during the second postnatal week due to Mg2+ blockade (Arvanian et al., 2004). We previously found that restoring NMDA receptor function by adding back the NR2d subunit of the NMDA receptor using an HSV viral construct enabled NT-3 to induce NMDA receptor-dependent potentiation of VLF synaptic transmission (Arvanian et al., 2004). When combined with NT-3, the NR2d subunit induced the appearance of synaptic responses in motoneurons from damaged VLF axons (Arvanian et al., 2006c).
These results suggest that activity of NMDA receptors in the target neurons might be an essential factor required for growing axons to establish glutamatergic synaptic contacts upon them. Although our studies were in motoneurons, it seems likely that novel connections on interneurons are important for establishing the connections to motoneurons described here.
Our current results demonstrate that combinatorial treatment with NT-3 and NR2d resulted in VLF connections in approximately 33% of the motoneurons in injured adult rats. Similar treatments with NT-3 and NR2d in contused or staggered double-hemisected neonatal rats resulted in recovery of some connectivity to virtually all motoneurons (Arvanian et al., 2006c). One possible explanation for the limited efficacy of the treatment with NT-3 and NR2d in adult rats is the age-related development of myelin-associated neurite growth inhibition in the spinal cord. The localization of Nogo-A in oligodendrocytes, where expression starts at a relatively late developmental stage (Huber et al., 2002;Taketomi et al., 2002), fits well with its role as an age-dependent myelin-associated inhibitor of regenerative fiber growth in adult mammals. Here we demonstrate that additive treatment with NT-3 and HSV-NR2d in adult rats was sufficient to form connections via conduction around the Hx only when combined with anti-Nogo-A antibody. Considering that HSV-1-mediated NR2d expression lasts 1-2 weeks and Nogo-Ab delivery lasts 2 weeks, we hypothesize that they play a role in initiating the establishment of polysynaptic connections observed 7-12 weeks post-injury.
The establishment of connections to motoneurons via the detour is supported by anatomical experiments that indicate an increase in the number of branches given off by propriospinal or supraspinal axons (either ascending or descending) in the contralateral white matter caudal to Hx. Growth of these branches to the ipsilesional side of the cord could provide access to the stimulating electrode above the lesion; similarly, the recrossing between the hemisection and L5 could provide access to ipsilesional motoneurons, perhaps via strengthening of polysynaptic connections. Branches of fibers ipsilateral to the Hx may also send branches to contact propriospinal neurons on the contralateral side which then recross below the Hx to contact L5 motoneurons either directly or via relays from short propriospinal interneurons (Courtine et al., 2008;Etlin et al., 2010). Axotomized fibers descending from supraspinal centers (Reed et al., 2008) including the corticospinal tract and serotonergic raphe spinal fibers known to be influenced by anti-Nogo (Liebscher et al., 2005;Müllner et al., 2008) could also play a role in re-establishing the connectivity observed in these experiments.
A further consideration is the recent finding that thoracic Hx can reduce conduction through the uninjured contralateral white matter beginning 1-2 weeks after Hx and can produce a decline in conduction velocity in axon segments across from the Hx (Arvanian et al., 2009). These changes were associated with decreased excitability in these axons (manifested by an increased rheobase), partial demyelination of the VLF and rubrospinal tract axons contralateral to the Hx (Hunanyan et al., 2011) and accumulation of chondroitin sulfate proteoglycans (CSPGs) in tissue surrounding the Hx (Hunanyan et al., 2010). Such changes undoubtedly contributed to the absence of any response through the region of injury in the controls; the treatments given at the time of the injury could have either prevented this decline in conduction or reversed it. In this context, NT-3 has been found to induce oligodendrocyte proliferation and myelination of regenerating axons in the contused adult rat spinal cord (McTigue et al., 1998), and the presence of Nogo has complex effects on oligodendrocyte differentiation which could affect myelination and impulse conduction (Pernet et al., 2008). The effects of NT-3 and anti-Nogo on myelination of regenerating fibers and conduction through the region contralateral to Hx, as well as the combination of this treatment with intraspinal digestion of CSPGs (Fawcett, 2009), remain to be determined in the current Hx model.
In conclusion, these results demonstrate that combination treatments using anti-Nogo, NT-3 and the NR2d subunit promote the establishment of a synaptic detour around a Hx. This pathway involves sprouting of white-matter fibers to the opposite side and may contribute to behavioral improvement. Future experiments should explore this combination approach to studying other spinal injuries, e.g. contusion, to determine whether recovery of function is improved under these experimental conditions.
|
v3-fos-license
|
2021-05-11T00:07:11.172Z
|
2021-01-01T00:00:00.000
|
234224516
|
{
"extfieldsofstudy": [
"Biology"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.e3s-conferences.org/articles/e3sconf/pdf/2021/20/e3sconf_emmft2020_02026.pdf",
"pdf_hash": "10cbc9bb149a82c872f84e6355481deee47abaf4",
"pdf_src": "ScienceParsePlus",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46414",
"s2fieldsofstudy": [
"Biology",
"Chemistry"
],
"sha1": "3b4a7bfb48800f5c2ab236bd42f96d60aa3c88c0",
"year": 2021
}
|
pes2o/s2orc
|
Effect of pesticides "Butylcaptax (Russia)" and "Droppa (Russia)" on respiration and oxidative phosphorylation of liver mitochondria of pregnant rats and their embryos
The article provides information on the effect of the pesticides butylcaptax and droppa on respiration and oxidative phosphorylation of liver mitochondria in pregnant rats and their embryos. It has been shown that butylcaptax and droppa reduce the oxidation of succinate and α-ketoglutarate in the V4*, V3 and Vdnf states and the coupling of the mitochondrial preparations in the liver mitochondria of pregnant animals and their embryos. The most significant inhibition of ATP formation in the respiratory chain of fetal and maternal liver mitochondria occurs via the NAD-dependent pathway, especially in the case of poisoning with butylcaptax on the 19th day of pregnancy. Apparently, inhibition of ADP-stimulated respiration is associated with inhibition of electron transfer along the respiratory chain or is a consequence of inhibition of the transport of phosphate or ADP into mitochondria, which plays a key role in the mechanism of oxidative phosphorylation. A decrease in the coupling of oxidation and phosphorylation does not create conditions for the accumulation of energy in a utilizable form, that is, in the form of ATP. The effect of droppa on respiration and oxidative phosphorylation in the liver mitochondria of pregnant rats and their embryos was similar but less pronounced.
Introduction
The accumulation of toxic chemicals in the body of pregnant women is extremely dangerous. It leads not only to chronic intoxication, but also to genetic shifts in the offspring [1].
In recent years, the hypothesis that energy deficiency plays a dominant role in the development of diseases of chemical etiology has been gaining more and more supporters [1,2]. Regardless of whether the damaging agents act directly on the mitochondrial membranes or whether this effect is mediated by intermediate factors, membranes react subtly to the action of chemical agents by changing their structural characteristics and the activity of membrane-bound enzymes. In other words, the reaction and state of these organelles reflect the state of the entire cell. The enzymes of the respiratory chain are functionally linked to the inner mitochondrial membrane and localized on its inner side [14].
Cytochrome c oxidase is the terminal enzyme of the respiratory chain. Its reducing agent is cytochrome c, located in the intermembrane space. This enzyme binds oxygen and rapidly reduces it to two water molecules [15]. The cytochrome c oxidase subunit also interacts with phospholipids of the mitochondrial membranes [16].
The vital activity of the cell is supported by the functioning of complex enzymatic assemblies associated with biomembranes. The state of membranes largely determines the biosynthesis of protein, nucleic acids, and lipids, the synthesis and degradation of high-energy substrates, the transport of substances and the utilization of intermediate metabolic products [17].
The effect of pesticides on the structural and functional state of mitochondria is an important area of toxicology, owing to the large role of these organelles in the energy supply of the cell. The inner mitochondrial membranes are called "coupling" membranes, since they contain the enzymes of electron transfer and of the associated phosphorylation. "Coupling" membranes are characterized by another feature: a strong dependence of enzyme function on the integrity of the membrane structure [15].
The high sensitivity of biomembranes to the action of external factors is primarily due to their complex structural organization, which ensures the direction and speed of a particular cellular function.
Fazalone has a pronounced membrane-toxic effect. It causes swelling and destruction of mitochondrial membranes and a reduction of cristae, which is a sign of the effect of this toxic chemical on the bioenergetic potential of the cell [11,17].
Some groups of pesticides are capable of inhibiting the mitochondrial respiratory chain, uncoupling the processes of oxidative phosphorylation and thereby disrupting the energy supply of the liver tissue [17]. DDT and sevin reduce the intensity of oxidative phosphorylation, not only reducing oxygen uptake but also uncoupling phosphorylation from oxidation. Sevin has the greater inhibitory effect when succinic acid is used as the substrate (phosphorylation decreases by 42%, respiration by 20%), and DDT when ketoglutaric acid is used (phosphorylation decreases by 70%, respiration by 38%). The authors suggest that the common feature in the mechanism of action of both pesticides is their ability to dissolve in lipids.
Lipid-soluble substances with an aromatic structure increase the membrane permeability for protons, thus reducing the membrane potential and inhibiting oxidative phosphorylation [6,10].
In connection with the above, we studied the effect of butylcaptax and droppa on the oxidative phosphorylation systems in the liver mitochondria of pregnant rats and their embryos.
Materials and methods
The objects of research were white female Wistar rats weighing 180-180 g. Butylcaptax and droppa were administered at a dose of 1/10 LD50 on the 3rd, 13th and 19th days of pregnancy, intragastrically (per os) by gavage, for 5 days. For mating, females in the proestrus-estrus stage were placed overnight with males at a ratio of 3:1. The first day of pregnancy was taken as the day on which sperm were detected in vaginal smears. The animals were killed on the 20th day of pregnancy, when the embryo had reached a significant size, at the end of organogenesis. In the experiments, we used liver mitochondria from the embryos and from the maternal organism.
Mitochondria were isolated by the method of differential centrifugation [16]. The rat was decapitated; the removed liver was placed in a beaker with a cooled isolation medium.
The rate of oxygen consumption by mitochondria was measured by the polarographic method on an LP-7 polarograph using a rotating electrode under standard conditions at 25°C. The ADF/0 and DK ratios were expressed according to Chance-Williams [10-15]. Succinate and α-ketoglutarate served as oxidation substrates.
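For orientation, the two coupling indices reported in the Results can be computed as sketched below. In the Chance-Williams scheme the respiratory control ratio (DK in this paper) is conventionally the state 3 rate divided by the state 4 rate, and the ADF/0 ratio corresponds to the conventional ADP/O ratio (ADP phosphorylated per oxygen atom consumed during the state 3 burst). These interpretations, the function names and the numerical values are assumptions added for illustration, not the authors' calculations.

```python
# Minimal sketch, assuming the standard Chance-Williams definitions.
def respiratory_control_ratio(v3: float, v4: float) -> float:
    """DK: ADP-stimulated (state 3) rate divided by the resting (state 4) rate,
    both in natoms O / (min x mg protein)."""
    return v3 / v4

def adp_o_ratio(adp_added_nmol: float, oxygen_consumed_natoms: float) -> float:
    """ADP/O (written ADF/0 in the text): nmol ADP phosphorylated per natom O
    consumed during the state 3 burst."""
    return adp_added_nmol / oxygen_consumed_natoms

# Placeholder numbers only, not data from this study.
print(respiratory_control_ratio(v3=120.0, v4=30.0))                     # 4.0
print(adp_o_ratio(adp_added_nmol=250.0, oxygen_consumed_natoms=150.0))  # ~1.67
```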
Results and discussion
The experiment used mitochondria subjected to a single freeze-thaw. Enzymatic activities were expressed in μmoles of oxygen consumed in 1 min, calculated per 1 mg of mitochondrial protein.
Data on the effect of butylcaptax and droppa on respiration and oxidative phosphorylation in the mitochondria of the liver of embryos and the maternal organism on the 3rd, 13th and 19th days of pregnancy are presented in Figures 1 and 2.
Note: Respiratory rate in oxygen atoms/min x mg protein, P < 0.05.
Administration of butylcaptax to pregnant rats induces unidirectional changes in the functional parameters of liver mitochondria. In particular, in case of poisoning with this pesticide at a dose of 1/10 LD50 on the 3rd day of pregnancy, the respiration rate of the liver mitochondria of the maternal organism (in samples with succinate) in metabolic states V4*, V3 and Vdnf decreased by 5, 7, and 11%, respectively. In the V4 state, it practically did not change. As a result, the DK value, defined as the ratio of the respiration rates of mitochondria in the metabolic states V3 and V4, decreased by 10%. An insignificant decrease in the ADF/0 coefficient was also observed. In experiments with α-ketoglutarate under the same conditions, changes in respiration and oxidative phosphorylation of liver mitochondria were unidirectional. In particular, the respiratory rate decreased insignificantly in the V4* and V3 states, and in the V4 state this indicator practically did not differ from the control. As a result, DK decreased by 5%. In this case, the ADF/0 ratio remained at the control level.
Respiration of mitochondria in the uncoupled state (Vdnf) decreased by 12%. Similar changes were observed in the respiratory and oxidative phosphorylation systems in the liver mitochondria of embryos in case of poisoning with butylcaptax on the 3rd day of development (Figures 3 and 4).
Note: Respiratory rate in oxygen atoms/min x mg protein, P < 0.05.
At the same time, the functional state of the respiratory chain of liver mitochondria in 21-day-old embryos of intact rats was characterized by low values of DK and of ADP-stimulated respiration. Respiration in the V4* and V4 states (according to Chance) did not differ from these parameters in the maternal organism. Other authors have also reported low DK values during the oxidation of succinate and other substrates in the mitochondria of embryos [16]. In the mitochondria of embryonic tissues, along with the function of energy supply to the cell, the plastic function of redox processes, which supply hydrogen and substrates for synthetic reactions, is of great importance. Such coordination of the main and alternative functions of biological oxidation is achieved by a weakening of both free and phosphorylating oxidation, which results in a decrease in DK [17].
In our studies, when rats were poisoned with butylcaptax on the 3rd day of pregnancy, an insignificant decrease in the oxidation of succinate was observed in the liver mitochondria of embryos in metabolic states V3 and Vdnf (by 4 and 6%), while the respiratory rate in state V4 increased (by 7%). As a result, the DK and ADF/0 indicators decreased (by 11 and 13%, respectively).
In experiments with α-ketoglutarate, the respiration rate in all metabolic states (V4*, V3, V4, and Vdnf) decreased by 6-12%. At the same time, DK and the ADF/0 coefficient remained at the control level.
In case of poisoning with butylcaptax on the 13th day of pregnancy, the rate of succinate oxidation in the liver mitochondria of rats in the V4* state decreased by 14%, in the V3 state -by 14%, and V4 -by 15%. The value did not change significantly, which reduced the DK by 26% and the ADF/0 ratio by 10%.
In experiments with α-ketoglutarate during the same study period, there was a slight decrease in the V3 value (by 18%) compared with the control, as a result of which DK decreased by 25% and the ADF/0 ratio by 12%.
When rats were poisoned on the 13th day of pregnancy in the mitochondria of the embryonic liver, the rate of succinate oxidation in the V3 and Vdnf states decreased by 12 and 8%: the respiration rate in the V4* and V4 states did not change, which, in turn, led to a decrease in DK by 16% and ADF/0 ratio by 20%.
However, when an NAD-dependent substrate was used in the same period, a pronounced inhibition of respiration in the metabolic state V3 was noted (by 22%). The rate of α-ketoglutarate oxidation in the V4*, V4, and Vdnf states did not differ significantly from the control. Due to these changes, DK decreased by 20%, ADF/0 -by 22%.
The toxic effect of butylcaptax on the bioenergetic potential of the liver mitochondria of the maternal organism and the embryo was manifested in case of poisoning on the 19th day of pregnancy. In particular, in rat liver mitochondria the oxidation of succinate decreased in metabolic states V4*, V3, and Vdnf by 19, 22, and 18%, respectively, and the oxidation of α-ketoglutarate by 27, 18 and 20%. The respiration rate in the V4 state did not differ practically from the control. As a result of such changes, the coupling of the mitochondrial preparations, assessed by DK and ADF/0, decreased by 26 and 10% with succinate and by 25 and 12% with α-ketoglutarate.
Somewhat different changes were observed in the respiratory and oxidative phosphorylation systems in the liver mitochondria of embryos during poisoning with butylcaptax on the 19th day of pregnancy. When succinate was used as the oxidation substrate, the rate of phosphorylating respiration in the V3 state decreased by 15% and in the Vdnf state by 10%. At the same time, the DK value decreased by 20% and the ADF/0 ratio by 32%. The level of mitochondrial respiration at rest (V4*) and after depletion of ADP (V4) did not differ from the control.
A similar picture was observed for the NAD-dependent oxidation pathway. In particular, in media with α-ketoglutarate in the metabolic state V3, mitochondrial respiration decreased by 34%. In states V4*, V4, and Vdnf, it did not differ significantly from the control. At the same time, the DK indices and the ADF/0 ratio decreased by 37 and 35%, respectively.
Inhibition of ADP-stimulated respiration is apparently associated with inhibition of the respiratory chain and with inhibition of the transport of phosphate or ADP into mitochondria, which plays a key role in the mechanism of oxidative phosphorylation. A decrease in the coupling of oxidation and phosphorylation, expressed as a decrease in the ADF/0 ratio, does not create conditions for the accumulation of energy in a utilizable form, that is, in the form of ATP.
Thus, poisoning with butylcaptax inhibits the transfer of electrons along the respiratory chain and significantly suppresses the associated process of oxidative phosphorylation in the liver mitochondria of the maternal organism and the embryo. The most profound disturbance was noted in case of poisoning on the 19th day of pregnancy.
In the next series of experiments, we investigated the effect of droppa on the functional parameters of the liver mitochondria of the maternal organism and the embryo, in case of poisoning on the 3rd, 13th and 19th days of pregnancy ( Figures 5-8).
Note: Respiratory rate in oxygen atoms/min x mg protein, P < 0.05.
In case of poisoning with this pesticide on the 3rd day of pregnancy, an insignificant decrease in the oxidation of succinate and α-ketoglutarate in the V3 state was observed in the mitochondria of the liver of the mother and the embryo. The rate of oxidation of substrates in other metabolic states, the DK index and the ADF/0 ratio did not differ from the control. In general, the state of energy metabolism in the mitochondria of the liver of the mother and the embryo, in such conditions, can be considered satisfactory.
Droppa showed a toxic effect on the 13th day of pregnancy. In the mitochondria of the rat liver, the oxidation of succinate decreased in the V4*, V3, and Vdnf states by 11-13%. The DK value and the ADF/0 ratio did not change.
In experiments with α-ketoglutarate in this series of experiments, a decrease in respiration was also observed in metabolic states V4* (by 20%), V3 (by 11%), and Vdnf (by 10%). The respiration of rat liver mitochondria in the V4 state did not change, as a result of which DK decreased by 14%.
Note: Respiratory rate in oxygen atoms/min x mg protein, P < 0.05.
Similar changes were observed in the energy metabolism of the liver mitochondria of embryos with droppa poisoning on the 13th day of development. The rate of succinate oxidation decreased in all metabolic states (by 7-12%); in experiments with α-ketoglutarate, this indicator decreased by 12-22%. When succinate was used, DK and ADF/0 did not change, but in experiments with α-ketoglutarate they decreased by 11%.
When poisoned with droppa on the 19th day of pregnancy, its toxic effect increased markedly. Thus, the oxidation of succinate in rat liver mitochondria in metabolic states V4*, V3, V4, and Vdnf decreased by 16, 17, 13, and 16%, respectively. DK decreased by 6% and the ADF/0 ratio by 7%. In experiments with an NAD-dependent substrate, droppa poisoning caused the most pronounced decrease in the V4*, V3 and Vdnf values (by 20%). The V4 value did not change significantly. As a result, DK decreased by 30% and the ADF/0 coefficient by 14%.
Unidirectional changes were observed in the respiratory and oxidative phosphorylation systems in the liver mitochondria of embryos in case of droppa poisoning on the 19th day of development. At this time, droppa reduced the rate of succinate oxidation in metabolic states V4* (by 10%), V3 (by 19%), V4 (by 15%) and Vdnf (by 10%). At the same time, DK and the ADF/0 coefficient decreased by an average of 7%. When α-ketoglutarate was used under the same conditions, a pronounced inhibition of respiration in states V4*, V4, and Vdnf was observed (by 16, 18, and 25%, respectively). The rate of phosphorylating respiration decreased by 33% compared to the control. As a result, DK was suppressed by 18% and the ADF/0 coefficient by 25%.
Note: Respiratory rate in oxygen atoms/min x mg protein, P < 0.05.
As evidenced by the above data, the action of butylcaptax and droppa is characterized by a decrease in the rate of electron transport along the respiratory chain, with the most profound inhibition falling on the NAD-dependent segment of the respiratory chain in the liver mitochondria of the mother and the embryo. Poisoning of the maternal organism with butylcaptax and droppa causes uncoupling of oxidative phosphorylation, which, in turn, leads to disruption of energy conversion in the liver mitochondria of rats and their embryos. These changes are most pronounced in case of poisoning on the 13th and 19th days of pregnancy, when the toxic effect is higher.
Under the action of these pesticides, profound changes occur in the energy supply system of the liver mitochondria of both the mother and the embryo. The results obtained allow us to believe that in the pathogenesis of poisoning with butylcaptax and droppa, a certain role is played by disorders of oxidative-phosphorylating processes in the body.
Thus, butylcaptax and droppa reduce the oxidation of succinate and α-ketoglutarate in the V4*, V3 and Vdnf states and the coupling of the mitochondrial preparations in the liver mitochondria of pregnant animals and their embryos. The most significant inhibition of ATP formation in the respiratory chain of fetal and maternal liver mitochondria occurs via the NAD-dependent pathway, especially in the case of poisoning with butylcaptax on the 19th day of pregnancy. Apparently, inhibition of ADP-stimulated respiration is associated with inhibition of electron transfer along the respiratory chain or is a consequence of inhibition of the transport of phosphate or ADP into mitochondria, which plays a key role in the mechanism of oxidative phosphorylation. A decrease in the coupling of oxidation and phosphorylation does not create conditions for the accumulation of energy in a utilizable form, that is, in the form of ATP.
Changes in the respiration rate of intact mitochondria can be caused not only by changes in the number of respiratory carriers in the electron transport chain or by its selective blocking, but also by the microenvironment of the respiratory chain components and the activity of substrate transfer systems across the inner mitochondrial membrane. Butylcaptax and droppa inhibit the rate of electron transfer and oxidative phosphorylation in the liver mitochondria of the mother and fetus. These disorders are most pronounced in case of poisoning with butylcaptax on the 19th day of pregnancy. In the following experiments, we investigated the effect of butylcaptax and droppa on the activity of the oxidase systems of the mitochondrial membranes of the liver of pregnant rats and their embryos.
Considerable experimental material has been accumulated on the influence of various environmental factors and chemical preparations on the activity of membrane-bound enzymes and polyenzyme systems of mitochondria. The study of the effect of pesticides on the functioning of the mitochondrial respiratory chain is one of the most important tests used to decipher the primary mechanisms of intoxication.
To fully characterize the toxic effect of butylcaptax and droppa on mitochondrial membranes, in the next series of experiments, we studied the effect of these pesticides on the activity of oxidase systems of rat liver mitochondria and their embryos. In the experiments, we used preparations of mitochondria subjected to a single freezing and thawing.
Poisoning of animals with butylcaptax and droppa differently affects the activity of the oxidase systems of the liver mitochondria of the maternal organism and the embryo. In this case, the toxic effect of butylcaptax is more pronounced.
The activity of the NADH oxidase system of the respiratory chain in the liver mitochondria of rats and their embryos decreased more significantly during all periods of the study than that of the other enzyme systems. Thus, after poisoning with butylcaptax on the 3rd day of pregnancy, the activity of this enzyme decreased by 16% in the mitochondria of the mother's liver and by 10% in the mitochondria of embryos, whereas the activity of cytochrome c oxidase decreased by 11 and 10%, respectively, and the activity of succinate oxidase by 5 and 8%.
In case of poisoning with butylcaptax on the 13th and 19th days of pregnancy, the changes were more pronounced. In particular, in the mitochondria of the mother's liver, the activity of NADH oxidase decreased by 26 and 40%, respectively, the activity of cytochrome c oxidase by 15 and 20%, and that of succinate oxidase by 11 and 14%. Similar changes were observed in the oxidase systems of the embryonic liver mitochondria.
The activity of NADH oxidase decreased by 27-32%, followed by cytochrome c oxidase by 16-22% and succinate oxidase by 6-12%. Similar changes were observed in the polyenzyme system of the liver mitochondria of rats and their embryos after droppa poisoning, but they were less pronounced.
Whereas the activity of NADH oxidase in the liver mitochondria of the maternal organism decreased by an average of 16-40% under the influence of butylcaptax, under the influence of droppa it decreased by 5-25%; the activity of cytochrome c oxidase decreased by 11-25% and 7-20%, respectively, and that of succinate oxidase by 5-14% and 8-11%.
The levels of NADH oxidase, cytochrome c oxidase and succinate oxidase in the liver mitochondria of embryos also decreased with droppa poisoning on days 3, 13 and 19. The deepest inhibition (by 29%) was observed in NADH oxidase activity (Figures 9 and 10). Thus, the action of butylcaptax and droppa on the body of pregnant rats decreases the activity of the respiratory chain enzymes of the liver mitochondria of the mother and the embryo. The most significant inhibition occurs in the NADH oxidase branch of the respiratory chain. Poisoning of rats with these toxic chemicals leads to rather profound disturbances in the oxidative phosphorylation system and in the electron transport chain in the liver mitochondria of the mother and embryo. In case of poisoning with butylcaptax, the disturbances are more pronounced.
Conclusion
Butylcaptax and droppa inhibit the rate of electron transfer and oxidative phosphorylation in the liver mitochondria of the mother and fetus. These disorders are most pronounced in case of poisoning with butylcaptax on the 19th day of pregnancy.
The study of the state of the oxidase systems of the mitochondrial membranes of the liver of pregnant rats and embryos shows that butylcaptax and droppa reduce the activities of NADH oxidase, succinate oxidase, and cytochrome c oxidase at all times of administration. NADH oxidase activity of the mitochondria of pregnant rats and embryos was the most strongly inhibited in case of butylcaptax poisoning. Butylcaptax leads to a deeper inhibition of the rate of electron transfer in various segments of the respiratory chain of liver mitochondria. The most profound inhibition is observed in the NADH oxidase branch during poisoning with butylcaptax on the 19th day of pregnancy.
Thus, the pesticides butylcaptax and droppa cause ultrastructural and, therefore, functional changes in the subcellular components of hepatocytes in pregnant rats and embryos. These changes reduce the protective and adaptive capabilities of the whole organism.
|
v3-fos-license
|
2018-04-03T02:38:16.619Z
|
2016-11-30T00:00:00.000
|
3800518
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "HYBRID",
"oa_url": "https://doi.org/10.1111/cobi.12811",
"pdf_hash": "f69279bf24613f1aa2458f4ff9fb6c9756619fdc",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46417",
"s2fieldsofstudy": [
"Environmental Science"
],
"sha1": "f69279bf24613f1aa2458f4ff9fb6c9756619fdc",
"year": 2016
}
|
pes2o/s2orc
|
Understanding conservationists’ perspectives on the new‐conservation debate
Abstract A vibrant debate about the future direction of biodiversity conservation centers on the merits of the so‐called new conservation. Proponents of the new conservation advocate a series of positions on key conservation ideas, such as the importance of human‐dominated landscapes and conservation's engagement with capitalism. These have been fiercely contested in a debate dominated by a few high‐profile individuals, and so far there has been no empirical exploration of existing perspectives on these issues among a wider community of conservationists. We used Q methodology to examine empirically perspectives on the new conservation held by attendees at the 2015 International Congress for Conservation Biology (ICCB). Although we identified a consensus on several key issues, 3 distinct positions emerged: in favor of conservation to benefit people but opposed to links with capitalism and corporations, in favor of biocentric approaches but with less emphasis on wilderness protection than prominent opponents of new conservation, and in favor of the published new conservation perspective but with less emphasis on increasing human well‐being as a goal of conservation. Our results revealed differences between the debate on the new conservation in the literature and views held within a wider, but still limited, conservation community and demonstrated the existence of at least one viewpoint (in favor of conservation to benefit people but opposed to links with capitalism and corporations) that is almost absent from the published debate. We hope the fuller understanding we present of the variety of views that exist but have not yet been heard, will improve the quality and tone of debates on the subject.
Introduction
"Conservation in the Anthropocene" ) triggered a vibrant, and often contentious, debate about the future of biodiversity conservation. This debate, over what has become known as the new conservation, has unfolded through a series of position and opinion pieces that are mostly either in favor of the new conservation or against it (Greenwald et al. 2013;Noss et al. 2013;Soulé 2013;Doak et al. 2014;Miller et al. 2014). Several pieces analyzed the nature and tone of the debate (Hunter et al. 2014;Tallis & Lubchenco 2014). Although it has extended into the broader conservation community, the debate's public manifestations have been "dominated by only a few voices, nearly all of them men's" (Tallis & Lubchenco 2014: 27), and no attempt has been made to describe views from a wider community of conservationists. This has led hundreds of signatories to back Tallis and Lubchenco's (2014) call for a new chapter in the debate based on a wider range of views.
Originally proposed in an essay for The Breakthrough Institute and further developed in later articles, the new conservation is based on a series of core principles and values (described by its authors as functional and normative postulates, respectively) for conservation in the 21st century (Table 1). The new conservation postulates are an attempt to update Soulé's (1985) foundational functional postulates for conservation. They draw on developments in the conservation sciences and react to what its proponents see as Soulé's damaging inattention to human well-being.

Table 1. Functional and normative postulates of the new conservation.
Functional postulates: "'pristine nature,' untouched by human influences, does not exist"; "the fate of nature and that of people are deeply intertwined"; "nature can be surprisingly resilient"; "human communities can avoid the tragedy of the commons"; "local conservation efforts are deeply connected to global forces".
Normative postulates: "conservation must occur within human-altered landscapes"; "conservation will be a durable success only if people support conservation goals"; "conservationists must work with corporations"; "conservation must not infringe on human rights and must embrace the principles of fairness and gender equity".
In response, authors who might be called traditional conservationists contend, inter alia, that new conservation exaggerates nature's resilience, that its embrace of economic growth ignores fundamental planetary limits, and that there are many almost-intact wildernesses worth saving, which are neglected by a greater focus on conserving human-dominated places (Jacquet 2013;Noss et al. 2013;Soulé 2013;Doak et al. 2014;Miller et al. 2014;Wilson 2016). Traditional conservationists also argue that most conservation already takes place in human-dominated places. In contrast to the assertions of new-conservation proponents, Greenwald et al. (2013) argue that conservation has long held concerns for human well-being, and this was mentioned in Soulé's (1985) seminal article.
The antagonism is partly because the debate on new conservation is not just about how conservation should be done but also about different ethical values that underpin why conservation should be done and for whom (Hunter et al. 2014). New conservation is more anthropocentric, emphasizing the benefits of nature to humans and prioritizing the emergent properties of ecosystems that provide these, such as stability and productivity. Traditional conservation is more biocentric, emphasizing the intrinsic value of nature and prioritizing issues of species diversity and extinction. These values are often implicit rather than explicit in key position papers (Hunter et al. 2014).
Conservation has a history of plural views driving different framings of what conservation is, and what it is for (Mace 2014), and these longer-running debates are reflected in the current new versus traditional conservation debate (Holmes 2015). There has been a long debate about whether poverty alleviation in conservation is a damaging distraction, an ethically justifiable addition to the mission of conservationists, or a vital tool to make conservation more effective (Roe 2008). Similarly, there have been disputes over whether true wilderness exists and whether it is a useful or harmful concept for conservation (Callicot & Nelson 1998). Conservationists variously advocate for and critique working with corporations and capitalism (Brockington & Duffy 2010). What is new in the new-conservation debate is the way these and other issues have been packaged into just 2 opposing positions on why, how, and what to conserve (Holmes 2015). Meanwhile, other relevant debates in conservation social science, such as those on biocultural diversity, remain absent.
One substantial body of social science literature emerging in recent years, which is particularly relevant to many key themes in the new conservation, is that on neoliberal conservation. This explores the increasing integration between conservation and capitalism, considering the mechanisms by which such integration has taken place (e.g., payments for ecosystem services, biodiversity offsetting, and ecotourism), the claims of synergies between conservation and capitalism that underpin these mechanisms, and the role of major conservation nongovernmental organizations (NGOs) in promoting such mechanisms (Igoe & Brockington 2007;Brockington & Duffy 2010). These claimed synergies are part of the new-conservation discourse, which warns against "scolding capitalism" and advocates working with corporations not as a "necessary evil" but because they "can be a positive force for conservation" (Kareiva & Marvier 2012: 967). The critical literature on neoliberal conservation originates from diverse authors, including political ecologists (Igoe & Brockington 2007), conservation biologists (McCauley 2015), and mixtures of the 2 (Redford & Adams 2009). It has direct relevance to the new-conservation debate, but explicit cross-referencing between the two is rare (but see Spash 2015).
We sought to expand the debate about new conservation beyond the voices of a few prominent individuals by empirically examining the range of positions that exist among a wider group of conservationists, sampled from an international conservation conference. Accordingly, we aimed to evaluate the extent to which a particular group of conservationists share the views espoused in the public debate or adopt more nuanced or contrasting positions.
Q Methodology
We used Q methodology to undertake a systematic analysis of the perspectives of conservation professionals attending the 2015 International Congress on Conservation Biology (ICCB) in France. This method is growing in popularity for examinations of structure and form within subjective opinions and discourses, and it has been increasingly applied to conservation research in recent years (e.g., Sandbrook et al. 2011;Cairns et al. 2014;Fisher & Brown 2014). It combines the qualitative study of perceptions with the statistical rigor of quantitative techniques (McKeown & Thomas 1998;Watts & Stenner 2012) and requires respondents to arrange statements drawn from the public discourse on the research topic onto a grid to reflect their views. The method is used to identify particular subjective positions, identified as factors, and how these are shared by people. It also enables the detailed analysis and comparison of the composition of these positions. The prevalence of positions in a population, which is the domain of conventional surveys, is not of concern with Q methodology. Accordingly, Q is designed for small numbers of participants and does not require a random sample (McKeown & Thomas 1998). Watts and Stenner (2012) provide a comprehensive explanation of Q methodology.
Q Statements
A Q study starts by defining statements. We identified potential statements from the peer-reviewed literature that introduces, critiques, and defends ideas associated with the new conservation (Supporting Information). To identify material to review, we started with the key articles that launched the new-conservation debate (e.g., ) and then used Google Scholar to identify all articles citing this work, discarding those that were clearly not relevant. We selected candidate Q statements from the articles covering the major themes of the new conservation literature. The Q statements must span the range of existing positions and be concise and clear, such that respondents can place them instinctively. We chose 38 statements from an initial list of 108 by eliminating redundant statements, the meaning of which was more effectively conveyed elsewhere. Some statements were rephrased for clarity or to reverse their meaning to give a balanced set of statements (called a Q set). We tested this set with 3 respondents (two academics working on conservation issues and a representative from an international conservation NGO). Minor alterations for clarity were undertaken following the pilot phase.
Recruiting Q Participants
Our respondents were delegates at the ICCB. This congress is the main international event of the Society for Conservation Biology (http://www.conbio.org/AboutUs/). We chose attendees of this event to capture views on the new-conservation debate from a wider group of respondents than those who had previously contributed publicly to the debate. However, our respondents were likely to have read or heard about it because they are part of the conservation mainstream, including academics and practitioners from major NGOs. The ICCB is the largest academic conservation conference in the world. The 2015 conference attracted roughly 2000 delegates from about 100 countries, making it an ideal venue for our study. One plenary session was a debate between Peter Kareiva and ecological economist Clive Spash on the new conservation, an event that likely prompted delegates to think about these issues. The attendees at the ICCB, and correspondingly the data we gathered, did not span the entire breadth that may exist within conservation on these issues. Many key voices, such as indigenous groups and rural residents of the global South, are significantly underrepresented at such events. Nevertheless, sampling the conference delegates allowed us to meet our objective of surveying views from a wider group of conservationists than those who have dominated the public debate on the new conservation.
Our research team at ICCB was composed of all authors and 2 data-collection assistants. We carried out face-to-face interviews with attendees, during which the Q survey provided the main stimulus. Respondents were selected purposively, rather than following conventional inferential statistical sampling aims, in order to capture the widest possible range of views (Watts & Stenner 2012). Four aspects drove our recruitment: people with a range of seniority, from thought leaders to junior conservationists; people with a known and distinct position on the debate (e.g., those who presented a relevant conference paper or referred to the debate); people without a known position on the debate who revealed in an initial conversation that they had a position; and people of both genders and from different sectors (e.g., academic and practitioner) and geographic origins. The team met daily throughout the congress to discuss progress and develop strategies to target underrepresented groups or perspectives until we judged that a sufficiently wide range of viewpoints had been captured, which was when responses represented both the existing published positions and a range of other perspectives on the debate. We also ensured that our 4-fold recruitment objectives were achieved. Thirty Q sorts were completed (Table 2). Respondents were informed that their responses would be anonymized and were asked to represent their own views rather than those of their organization. Permission to conduct the survey was obtained in advance from the organizers of ICCB. This research was subject to the ethical clearance procedure for research with human subjects at the University of Leeds.
Figure 1. The Q methodology grid used in the study, with columns ranging from "least like I think" (-4) to "most like I think" (+4). Respondents were asked to allocate statements to cells reflecting their relative agreement with each statement.
The Interviews
All interviews were conducted in a quiet place away from other people. After an initial explanation of the project and the method, respondents completed the Q survey, sorting the statements onto the grid (Fig. 1). We emphasized that the method measures the extent to which respondents agree with each statement relative to all the other statements, rather than gauging an absolute level of agreement. The grid and our instructions covered the range from most like I think to least like I think, and we encouraged respondents first to gather statements into three piles. Two of these represented statements at the ends of the salience continuum, whereas the third was for statements of lower or intermediate salience. Respondents were then asked to distribute statements onto the grid from these piles. During the interview, respondents were encouraged to explain the rationale behind their sorting. This yielded complementary qualitative data recorded in writing by the researchers. Where respondents had questions about statements, the researcher gave limited help to explain the meaning of the statement while aiming not to bias the respondent. Theory suggests that Q methodology grids should follow a normal distribution (Watts & Stenner 2012). Respondents were not constrained to follow the normal distribution shown on the grid but were encouraged to follow it as closely as possible. Rather than being a requirement of statistical analysis, this encourages respondents to prioritize statements, thereby revealing what is really salient to them (McKeown & Thomas 1998;Watts & Stenner 2012). Fifteen of the 30 respondents did not constrain their responses exactly to the normal distribution.
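A sketch of the forced quasi-normal distribution is given below. The grid runs from -4 ("least like I think") to +4 ("most like I think") for the 38 statements, but the exact number of cells per column is not stated in the text, so the capacities used here are an assumption that simply sums to 38 and peaks at the neutral column.

```python
from collections import Counter

# Assumed column capacities for a 38-statement Q set on a -4..+4 grid
# (illustrative only; the study's actual grid shape is not given in the text).
GRID_CAPACITY = {-4: 2, -3: 3, -2: 5, -1: 6, 0: 6, 1: 6, 2: 5, 3: 3, 4: 2}
assert sum(GRID_CAPACITY.values()) == 38

def follows_forced_distribution(q_sort: dict) -> bool:
    """q_sort maps statement id (1..38) to the column (-4..+4) it was placed in.
    Returns True only if the sort matches the forced distribution exactly."""
    counts = Counter(q_sort.values())
    return all(counts.get(col, 0) == cap for col, cap in GRID_CAPACITY.items())
```

Because half of the respondents did not follow the distribution exactly, a check like this would simply flag non-conforming sorts rather than reject them.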
Q Analysis
The Q sorts were analyzed using PQMethod software. A Q analysis involves three statistical procedures applied sequentially: correlation, factor analysis (here centroid analysis), and computation of factor scores (Watts & Stenner 2012). We rotated 3 factors following criteria in Watts and Stenner (2012). We based this decision on our judgment of the quantitative results of the analysis and our qualitative interpretation derived from our understanding of the respondents and their views. We used a varimax analysis and PQMethod's statistical threshold to automatically flag respondent Q sorts to factors. Five respondents were not flagged for any 1 factor. Following the quantitative stages, the analysis becomes more interpretive of the factors and is understood through representative Q sorts generated for each factor during the analysis (which represent the common ordering of statements for Q sorts associated with this factor) (Table 3). Table 3 was devised to help readers interpret differences between factors. We interpreted the factors themselves and the consensus statements, which did not distinguish between any pair of factors. We recognize that interpretation in Q is somewhat subjective (Eden et al. 2005). Where we refer to qualitative interview data in the results section, it derives from a respondent belonging to the factor described.
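The analysis sequence described above was carried out in PQMethod; a rough, free-standing approximation in Python is sketched below. It substitutes a principal-axis-style extraction for PQMethod's centroid method and uses a common rule-of-thumb loading threshold for flagging, so it illustrates the shape of the pipeline rather than reproducing the software's exact output.

```python
import numpy as np

def varimax(loadings, gamma=1.0, max_iter=100, tol=1e-6):
    """Varimax rotation of a respondents-by-factors loading matrix."""
    p, k = loadings.shape
    rotation = np.eye(k)
    previous = 0.0
    for _ in range(max_iter):
        rotated = loadings @ rotation
        u, s, vt = np.linalg.svd(
            loadings.T @ (rotated**3 - (gamma / p) * rotated @ np.diag((rotated**2).sum(axis=0)))
        )
        rotation = u @ vt
        current = s.sum()
        if current - previous < tol:
            break
        previous = current
    return loadings @ rotation

# q_sorts: one row per respondent, one column per statement score (-4..+4).
q_sorts = np.random.randint(-4, 5, size=(30, 38))   # placeholder data only

corr = np.corrcoef(q_sorts)                         # 30 x 30 inter-respondent correlations
eigvals, eigvecs = np.linalg.eigh(corr)
top = np.argsort(eigvals)[::-1][:3]                 # retain 3 factors, as in the study
loadings = eigvecs[:, top] * np.sqrt(eigvals[top])
rotated = varimax(loadings)

# Rule-of-thumb threshold for flagging a respondent to a factor at p < 0.05:
# |loading| > 1.96 / sqrt(number of statements).
threshold = 1.96 / np.sqrt(q_sorts.shape[1])
flags = np.abs(rotated) > threshold                 # boolean respondents-by-factors matrix
```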
Results
For each factor, the Q statement number and the normalized Q score for that statement in that factor are given in parentheses, and distinguishing statements (ranked in a significantly different way in one or both other factors [Watts & Stenner 2012]) are marked with an asterisk.
Factor 1
Factor 1 was associated with 9 respondents and was primarily distinguished by scepticism about markets, corporations, and capitalism; strong relative disagreement was displayed that conservation should work with capitalism (17 * , −3). There was concern that economic rationales displace other motivations for conservation (28 * , 2) and lead to unintended consequences (25 * , 1). More generally, plural rationales were thought to strengthen conservation (26, −4). Corporations were not considered a positive force for conservation (18 * , −1), and their support was not considered essential (35 * , −3). As one respondent noted, corporations are "unlikely to fully support conservation objectives" (interview 9). There was relative disagreement that economic growth is the best way to promote human well-being (38, −2) and reform of global trade was considered necessary (31 * , 2). This factor conveyed strong concern with the environmental impact of the world's rich (6 * , 4) and less concern with overall population growth (19, 0) relative to factors 2 and 3. Associated respondents believed conservation should do no harm to poor people (36, 2) and should seek to improve the well-being of all humans (21 * , 1). These goals were higher priorities than conserving nature for nature's sake (4 * , 0) but slightly lower than conserving ecosystem processes (24, 3) and biodiversity (34, 2). This factor conveyed ambivalence about whether conservation can be successful only by benefiting the poor (3 * , 0). This factor consistently did not favor traditional wilderness-focused conservation and conveyed the sense that pristine nature does not exist (9, 3) and that humans are not separate from nature (1, −4).
This factor promoted the idea that ethical values (23 * , 4) are more important than science (13 * , 0) in setting goals. Several respondents opined that the goals themselves are ethical statements. One noted that "science should inform how you do things in conservation, but not necessarily the goals" (interview 18). Biological evidence was not considered the most important source of evidence (7, −1). Unlike other factors, factor 1 was characterized by the idea that conservation should reduce humans' emotional separation from nature (22 * , 3).
Factor 2
Factor 2 was associated with nine respondents. The most salient statements of factor 2 related to the importance of conserving biodiversity (34 * , 4) and ecosystem processes (24, 4) as goals of conservation. The factor was distinctly biocentric, prioritizing nature for nature's sake (4 * , 3) and rejecting the idea that protecting nature for its own sake does not work (14, −3). Human well-being as a conservation goal was not a strong priority (21, 1), but this factor considered outcomes that mutually benefited nature and humans as often as possible (2 * , −4). Together, these 2 elements and the placement of statement 3 * (1), regarding an instrumental rationale for conservation providing benefits to local people, characterized human well-being as an important secondary objective of conservation. Factor 2 was pragmatic relative to an interest in plural rationales (26 * , −1), and public support for conservation was regarded as a priority (16, 3). The use of doom and gloom messages was strongly rejected (29, −3).
The placement of statements 15 and 32 showed that value in nature was considered to be everywhere and that conservation should take place in all landscapes (e.g. "agricultural landscapes can have a very high conservation value" [interview 6]). However, some areas were considered pristine (9 * , −2), a view that distinguished this factor. There was some interest in strictly protected areas (PAs) (10 * , 2). This factor was strongly science-oriented in terms of goal setting (13 * , 3) and favored evidence from biological sciences (7 * , 1).
Factor 2 conveyed a perceived need for reductions in population growth to achieve conservation goals (19, 2), for instance "I know it's controversial, but people are causing the problems and there are too many of them" (interview 5), and some concern about the environmental impacts of the rich (6, 2). In terms of how associated respondents considered local people and poverty, there was lower concern about doing no harm (36 * , 0) and displacement of people by conservation action than in other factors (8 * , 0). Although in the qualitative data respondents highlighted the need for appropriate consultation and consent from local communities (interview 15) and the need to avoid displacement, they also thought there may be cases where displacement could improve people's well-being (interview 6).
Perspectives on economic arguments (25, 0; 28, 0), corporations (18 * , 1), trade (31 * , −1), and capitalism (17 * , −1) were not priorities within this factor. This was coupled with the qualitative sense from one respondent that they did not have enough understanding of these issues to support strong views (interview 5). There was also pragmatism reflected in the idea that conservation needed to work with capitalism, but as one respondent stated: "that doesn't mean [capitalism] doesn't need to be changed" (interview 5).
Factor 3
Factor 3 was associated with seven respondents and primarily distinguished by its relative optimism about corporations (18 * , 3) and capitalism (17 * , 1). Those aligned with this factor expressed relative disagreement that there is a risk of economic rationales displacing other motivations (28, −1) and neutrality about whether using economic arguments could lead to unintended consequences (25, 0). In the words of one respondent aligned with this factor, "Capitalism is not such a bad thing" (interview 29). Those aligned with this factor believed that reforming global trade is necessary (31 * , 1) and that human population growth should be reduced (19, 1), but their views on these issues lay between the other factors' positions. Respondents thought that impacts on nature do not grow in line with income (33 * , −2).
Those aligned with this factor held strong views about the impact of conservation on people, believing it should do no harm to the poor (36, 4) and should not displace people to make way for PAs (8 * , −3). The factor displayed more optimism than others about the contribution of economic growth to well-being (38 * , −1) and considered more strongly than others that conservation will only succeed if it benefits people (3 * , 2). One respondent said when considering the well-being statement (21), "No. The goal should be conservation" (interview 21). This factor displayed less optimism than others about the possibility of conservation mutually benefiting people and nature (2 * , 0). One respondent said "I don't believe in this win-win-win, everyone wins. No. Some people will lose" (interview 29).
Those aligned with this factor believed pristine nature untouched by people does not exist (9, 3). Perhaps as a consequence, they expressed strong relative disagreement that strict PAs are required to achieve conservation goals (10 * , −4). Biodiversity was slightly less of a priority for this factor than factor 2 (34, 3), and unlike the other factors, associated respondents did not see conserving nature for its own sake as a goal of conservation (4 * , −1) or think that this strategy works (14 * , 1). The factor was positive about the role of science in goal setting (13 * , 2) and saw the need for more than just biological science evidence in conservation (7, −1). Unlike factor 1, here ethical values were not seen as important for goal setting (23 * , −1). As one respondent said, "maybe conservation has too many goals now" (interview 21).
Those aligned with this factor believed successful conservation requires broad public support (16, 2). They were fairly neutral on the need to reduce the emotional separation of people and nature (22 * , 0). They also believed strongly that plural rationales do not weaken conservation (26, −3). One respondent said that "the inability to see others' views, to see plurality of opinions and values is detrimental" (interview 23).
Consensus Statements
There was relative consensus that significant value exists in highly modified landscapes (15), whereas non-native species were generally thought to offer some conservation value (32). There was consensus in the weak relative disagreement with the idea that highlighting human domination of the planet may be used to justify further environmental damage (11). Consensus surrounded the idea that giving a voice to those affected by conservation actions improves conservation outcomes (30) and is an ethical imperative (37). There was consensus around a low salience ranking (+1 or 0) regarding whether conservation must benefit poor people as an ethical imperative (5) and relative disagreement with the proposition that human affection for nature grows in line with income (20). Relative consensus existed on the notion that conservation messages promoting anthropocentric rationales can be as effective as those emphasizing biocentric rationales (27). Finally, there was general agreement that maintaining biodiversity (34) and ecosystem processes (24) should be goals of conservation, but these did not meet the statistical criteria to be considered consensus statements.
Discussion
This article provides the first published evidence of what a wider group of conservationists who have not actively participated in the public debate about the new conservation think about the issues raised and positions put forward within that debate. Our results suggest the existence of at least 3 distinct ways of thinking about these issues. Two of these positions were recognizably related to the traditional and new-conservation positions described in the literature (factor 2 and factor 3, respectively), albeit with important distinctions. The third (factor 1) was strongly divergent from either of the positions described in the new-conservation literature and included elements more closely resembling the positions on market-based conservation found in the literature on neoliberal conservation. Below we offer descriptive labels for each factor. These are simplifications of the nuanced content of each factor, but they offer a useful shorthand to identify positions and facilitate further debate.
Factor 2 resembled the traditional conservation view most closely associated in this debate with the writing of Michael Soulé (2013; Miller et al. 2014), although with some important differences. As a result, we labeled it traditional conservation 2.0. Areas of overlap included a primarily biocentric motivation for conservation, a focus on conserving biodiversity and ecosystem processes, and a belief in the existence of pristine areas and in the value of biocentric arguments when communicating conservation. This factor placed a low priority on market-based mechanisms and economic arguments for conservation, which resembles arguments put forward opposing the new conservation (e.g., McCauley 2015). However, factor 2 diverged from the standard traditional conservation position described in the literature. In particular (and in line with factors 1 and 3), it promotes the conservation of biodiversity wherever it is found, including of non-native species and in highly modified landscapes, in contrast to the traditional conservation position that focuses strongly on pristine nature in strict PAs. This raises the question of whether the traditionalist position of authors such as Soulé (2013) and Wilson (2016) has relevance for many contemporary conservationists or represents an ultraorthodox view held by a small minority.
Factor 3 resembled the new-conservation position most closely associated with the writing of Peter Kareiva and Michelle Marvier, although again there were important differences. As such, we labeled it nearly new conservation. Areas of overlap included a generally optimistic view of market-based instruments in conservation, an interest in novel ecosystems, modified landscapes, and more pristine areas and a belief that science should play a strong role in conservation. Two areas of apparent distinction emerged between factor 3 and the standard new-conservation positions. First, new-conservation literature tends to adopt a primarily anthropocentric rationale for conservation in which benefiting people is an important goal in itself, whereas factor 3 was more concerned about avoiding harm to people than actually increasing their well-being. This suggests factor 3 represented a more instrumental view of the importance of benefiting people as a means to an end rather than an end in itself. Second, factor 3 was fairly neutral on the importance of addressing a separation of people from nature, whereas Kareiva (2008: 2758), a key architect of the new conservation, earlier argued that this separation "may well be the world's greatest environmental threat."

Although factors 2 and 3 mapped fairly neatly onto positions described in the existing new-conservation literature, factor 1 did not. It shared aspects of factor 3, including concern for biodiversity in modified and pristine landscapes and the need to avoid harm to people. However, it strongly diverged from factor 3 on the role of corporations and market-based instruments in conservation; it was critical of both. As such, we labeled it market scepticism. The position described by this factor is perhaps most closely aligned with those contained within critical social science scholarship on so-called neoliberal conservation (e.g., Igoe & Brockington 2007; Brockington & Duffy 2010). There was also strong overlap with the position of Spash (2015) put forward in a recent article and presentation to the ICCB and with the social-instrumentalism position described by Matulis and Moyer (2016). These critical arguments are almost absent from the literature that explicitly refers to the new-conservation debate, despite appearing in mainstream conservation publications (e.g., Redford & Adams 2009) and being commonplace in the literature and conferences of the conservation social science community, which has academic audiences in geography, anthropology, political science, and other disciplines.
Our results have 2 important implications for the new-conservation debate and broader thinking on future directions for conservation. First, there are more than two perspectives on what conservation is, why it matters, and how to do it. Others have pointed out that the new-conservation literature creates a false dichotomy (Tallis & Lubchenco 2014), and our results support this. Critics argue that the debate has been dominated by established and influential figures from a narrow demographic, rather than representing the broader demographic of conservation researchers and practitioners (Tallis & Lubchenco 2014), and has been conducted in an overly adversarial manner (Marris 2014). Our qualitative data support this claim and the dissatisfaction with the tone and nature of the debate. One respondent working for an international NGO stated, "the modus operandi of the loudest voices [in the new-conservation debate] is to provoke . . . It is a distraction from the real challenges the sector faces" (interview 23). Given that not all voices in conservation are present at the ICCB, particularly those of groups that have been historically marginalized in conservation debates, the range of opinions is undoubtedly even broader than what we captured.
Second, it is striking that we found a position (factor 1) that is almost completely absent from the new-conservation literature. Nine of our respondents were associated with this perspective and a similar position was presented by Clive Spash, who received a standing ovation from large sections of the audience in a plenary debate at ICCB. This finding suggests there is a latent critical viewpoint on neoliberal conservation that is held by a large number of conservationists but not represented by the actions of most conservation organizations or the writing of scholars like Soulé, Kareiva, and Marvier. Previous Q-method studies show similar resistance among some conservationists to market-based conservation (Sandbrook et al. 2013a; Blanchard et al. 2016). Articles in mainstream conservation journals have critiqued the underlying premises of market-based conservation (Redford & Adams 2009; Spash 2015), often authored by critical conservation social scientists. If such views are widespread, then there may be a ready audience for critical conservation social science scholarship among the conservation community, adding further weight to previous calls to improve the communication of ideas between these groups (Sandbrook et al. 2013b). To discover the prevalence of the viewpoints we identified, further research could build on this study by using survey methods designed to produce inferential results, focusing in particular on the conservation practitioner and non-Anglophone communities that were less represented at the ICCB.
Conservation is many things to many people, and it is not surprising people do not agree about everything. Although divisions over the new conservation could be treated as an ecumenical matter (Marvier 2014), with different approaches more suitable in different contexts (Pearson 2016), there will be places where they will collide, and there will be important disagreements that are worth acknowledging and discussing (Sandbrook 2015). Matulis and Moyer (2016) argue that such "agonistic pluralism" is preferable to the "inclusive conservation" that others have called for (e.g. Tallis & Lubchenco 2014), which can stifle minority viewpoints. That said, we identified some important areas of consensus and shared ground among our respondents, such as a recognition of the value of modified habitats, the importance of conserving ecosystem processes, and the need to give a voice to local people. In what has often been an adversarial public debate, the existence of these points of agreement could provide platforms for constructive debate in the conservation community about areas of disagreement. Our findings provide a fuller and more nuanced understanding of the variety of views that exist. We hope this will improve the quality and tone of debates surrounding the future of conservation.
|
v3-fos-license
|
2018-04-03T00:26:09.803Z
|
1973-09-10T00:00:00.000
|
11897227
|
{
"extfieldsofstudy": [
"Medicine",
"Chemistry"
],
"oa_license": "CCBY",
"oa_status": "HYBRID",
"oa_url": "https://doi.org/10.1016/s0021-9258(19)43512-6",
"pdf_hash": "3182813716a490f8f3fe79ddf99d330b4bfdb1ef",
"pdf_src": "Adhoc",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46424",
"s2fieldsofstudy": [
"Chemistry",
"Biology"
],
"sha1": "18cb8f69435b2d115076f0a3180e66be3d6dfebe",
"year": 1973
}
|
pes2o/s2orc
|
The Oxidation of Catechols by Reduced Flavins and Dehydrogenases
Abstract 1,2-Dihydroxybenzene-3,5-disulfonic acid (Tiron) has been investigated with regard to oxidation to the o-semiquinone form by both photochemical and enzymatic systems. The formation of the o-semiquinone radical can be recorded at pH 6.8 in the electron spin resonance spectrometer at room temperature. The kinetics of formation and decay of the radical has been determined, since it is relatively stable in solution. The parent catechol is not autoxidized to a significant extent below pH 7. The utility of Tiron as a model compound for catechol oxidation reactions lies in the stability of the o-semiquinone at lower pH values and the relative absence of side reactions or formation of highly oxidized pigments. The reaction between oxygen-reductant complexes and this catechol may be diagnostic of the formation of such intermediates where superoxide dismutase fails to inhibit effectively oxygen-dependent electron transfer. It has been previously reported (Massey, V., Palmer, G., and Ballou, D. (1971) in Flavins and Flavoproteins (Kamin, H., ed) p. 349, University Park Press, Baltimore) that in the photochemical system, reduced flavin mononucleotide reacts with oxygen to form a compound which then may dissociate to yield the flavin semiquinone and free superoxide anion. The equilibrium position for dissociation of the reduced flavin-oxygen compound may be shifted toward free superoxide anion at higher pH values. The flavin radical is shown to be unreactive with Tiron at pH 6.8. The disulfonated catechol which was used in the present work is chemically similar to catecholamines and other catechols of biological origin in that metal-catalyzed autoxidation to the o-semiquinone takes place readily above pH 8.5. Evidence is presented indicating that catechols and ferricytochrome c can react directly with the reduced flavin-oxygen compound by a pathway which is not susceptible to inhibition by superoxide dismutase. In contrast, the oxidation of Tiron by iron-flavoproteins is completely inhibited by superoxide dismutase. Thus, in the latter case, superoxide anion must be released from the enzyme active site before reacting with catechols or cytochrome c.
SUMMARY
1,2-Dihydroxybenzene-3,5-disulfonic acid (Tiron) has been investigated with regard to oxidation to the o-semiquinone form by both photochemical and enzymatic systems. The formation of the o-semiquinone radical can be recorded at pH 6.8 in the electron spin resonance spectrometer at room temperature.
The kinetics of formation and decay of the radical has been determined, since it is relatively stable in solution.
The parent catechol is not autoxidized to a significant extent below pH 7. The utility of Tiron as a model compound for catechol oxidation reactions lies in the stability of the o-semiquinone at lower pH values and the relative absence of side reactions or formation of highly oxidized pigments.
The reaction between oxygen-reductant complexes and this catechol may be diagnostic of the formation of such intermediates where superoxide dismutase fails to inhibit effectively oxygen-dependent electron transfer. It has been previously reported (MASSEY, V., PALMER, G., AND BALLOU, D. (1971) in Flavins and Flavoproteins (KAMIN, H., ed) p. 349, University Park Press, Baltimore) that in the photochemical system, reduced flavin mononucleotide reacts with oxygen to form a compound which then may dissociate to yield the flavin semiquinone and free superoxide anion.
The equilibrium position for dissociation of the reduced flavin-oxygen compound may be shifted toward free superoxide anion at higher pH values. The flavin radical is shown to be unreactive with Tiron at pH 6.8. The disulfonated catechol which was used in the present work is chemically similar to catecholamines and other catechols of biological origin in that metal-catalyzed autoxidation to the o-semiquinone takes place readily above pH 8.5. Evidence is presented indicating that catechols and ferricytochrome c can react directly with the reduced flavin-oxygen compound by a pathway which is not susceptible to inhibition by superoxide dismutase.
In contrast, the oxidation of Tiron by iron-flavoproteins is completely inhibited by superoxide dismutase.
Thus, in the latter case, superoxide anion must be released from the enzyme active site before reacting with catechols or cytochrome c.
* This paper is Contribution No. 745 of the Chemistry and Biology Research Institute, Canada Department of Agriculture.
Previous investigations into the role of o-semiquinone forms of biologically important catechols have been hampered by the relative instability of these free radicals in the physiological pH range. Although many catechols form stable semiquinones at pH 8.5 and above (1, 2), the observation of the electron spin resonance spectra of these compounds at neutral pH is difficult because of the short lifetime of the radicals in aqueous media. For this reason direct determinations of the kinetics of formation and decay of o-semiquinones in photochemical or enzymatic systems have been limited to flow mixing systems constructed within the cavity of an ESR spectrometer (3, 4).
Two different approaches may overcome some of the difficulties in the direct observation of the formation of unstable o-semiquinones.
Copper-proteins such as ceruloplasmin (5) and superoxide dismutase (6) may oxidize catechols and under some conditions stabilize the semiquinones. The reaction of norepinephrine or caffeic acid with superoxide dismutase isolated from Neurospora crassa yields appreciable concentrations of the o-semiquinone at pH 7.5 (6). Enzyme-bound copper is reduced concomitantly with catechol oxidation. However, in these cases the spectrum of the free radical product appears only at 77° K. The oxidation product of 1,2-dihydroxybenzene-3,5-disulfonate (Tiron) is much more stable than the products of oxidation of the naturally occurring catechols.
In the presence of the copper-protein or other oxidizing agents such as silver oxide (Ag2O), Tiron yields a room temperature-observable o-semiquinone. Sulfonation of the aromatic nucleus of pyrocatechol renders the free radical relatively stable near pH 7.
Because of indirect evidence for the oxidation of catechols, particularly of 1,2-dihydroxybenzene-3,5-disulfonate, by superoxide anion (7-9), we have employed Tiron as a chemical trapping agent for univalently reduced oxygen in photochemical and enzymatic systems. Tiron o-semiquinone was proposed to be a 1-electron oxidant for ferrocytochrome c (7, 8). Direct evidence for the selective reactivity of the catechol radical with ferrocytochrome c is reported.
As shown below, the formation and dismutation of Tiron o-semiquinone can be followed by ESR spectroscopy at 25° in the physiological pH range. Moreover, since the kinetics of appearance of the free radical can be recorded directly, it can be determined whether the o-semiquinone is formed initially or whether it is the product of a reaction between the original catechol and the corresponding o-quinone.
The exact role of superoxide anion in the oxidation and autoxidation of catechols at neutral pH is not completely known. Above pH 8.5 it is clear that superoxide anion plays a major role in the metal-catalyzed autoxidation process because the reaction is severely inhibited by superoxide dismutase (9). However, at lower pH, superoxide dismutase fails to inhibit the reaction at all (9). This finding indicates that free superoxide anion is not responsible for autoxidation below pH 8.5. Oxygen in the singlet excited (¹Δ) state conceivably might be involved in the oxidation of Tiron. However, recent reports suggest that this species also may react with superoxide dismutase (10, 11). Although superoxide anion can be observed directly by trapping it at low temperature in the ESR spectrometer (12), the formation of this species at higher temperatures cannot be followed readily in photochemical or enzyme-catalyzed systems at neutral pH. Tiron can act as an indicator for superoxide generation in these systems because it apparently reacts rapidly with superoxide anion to give the o-semiquinone. Autoxidation of the catechol is eliminated as a complicating side reaction because at pH 6.8 the o-semiquinone is not formed in the absence of a superoxide-generating system. Fundamental differences between the enzymatic and photochemical processes as determined by this method are reported below.

The primary site of formation of superoxide anion by metalloflavoproteins is not entirely clear (13, 14). The catechol probe might be expected to give some information as to the mode of the univalent reduction of oxygen by the flavin or iron-sulfur components of enzymes such as xanthine oxidase and dihydroorotate dehydrogenase.

MATERIALS AND METHODS

Dihydroorotate dehydrogenase was purified from cells of Zymobacterium oroticum. The enzyme was preactivated by dialysis against 2 mM cysteine as part of the purification procedure and was crystallized by the method of Aleman and Handler (15). Reaction of this activated enzyme preparation with dihydroorotate is possible in the absence of sulfhydryl-containing reagents (16). Xanthine oxidase purified from buttermilk was obtained as an active ammonium sulfate suspension from Sigma Chemical Co. Superoxide dismutase was prepared from bovine erythrocytes by the method of McCord and Fridovich (17). The electrophoretically homogeneous enzyme had an activity of 1300 enzyme units per mg of protein when assayed by a continuous recording photochemical procedure described previously (6). An enzyme unit defined by this assay method is equivalent to 2.5 enzyme units in the assay method of McCord and Fridovich (17). Dithionite-free ferrocytochrome c was prepared by a previously published method (8). FMN, catecholamines, and other biochemicals were obtained from Sigma. Tiron was supplied by Mann Research Laboratories, Inc. Double distilled water and inorganic reagents having low heavy metal analyses were used in all enzyme preparations and experiments.

Oxygen was reduced univalently in two ways, photochemically (13, 18) and enzymatically. Cuvettes or ESR cells containing the photochemical reaction mixture (see legends to figures) were irradiated with a 60-watt tungsten lamp which produced a total radiant energy flux of about 2 × 10⁵ ergs per cm²-s at the outer surface of the cell. Photoexcited FMN reacted with EDTA to give fully reduced FMN (FMNH2) and oxidation products of EDTA (18). The complex course of reoxidation of reduced FMN by molecular oxygen has been determined by Massey et al. (13). It is well established that the end products of this reaction are superoxide anion and hydrogen peroxide. However, a compound consisting of reduced FMN and oxygen is formed initially, followed by the production of FMN semiquinone and free superoxide anion (13). These reactions are summarized under "Discussion."

ESR spectra and kinetic data were obtained with the Varian E-3 ESR spectrometer. A quartz flat cell having an internal depth of 0.25 mm was fixed in place in the cavity and filled by means of a plastic capillary tube attached to the bottom. A sealing, disposable 1-ml syringe attached to the top of the cell was used to introduce the sample solutions into the cell or expel them from it. The spectra were recorded as a plot of the first derivative of microwave absorption against the magnetic field. For kinetic studies, the magnetic field and the microwave frequency were held constant while the signal height was monitored.

Photochemical reactions were carried out in an ESR cavity which had the front cover removed for irradiation. When the reaction was to be carried out in the absence of oxygen, the mixture was purged with prepurified helium or nitrogen before being placed in an oxygen-free cell. This procedure gave a final oxygen concentration of less than 2 μM, as determined with a Clark oxygen electrode. The reactions were started by switching on the 60-watt tungsten lamp, which was located 3 cm from the cell between the poles of the spectrometer magnet.

For enzyme-catalyzed reactions a mixture lacking only the enzyme was placed in the flat cell and the ESR spectrometer was tuned. The reaction was initiated by pumping the buffer-substrate mixture out of the flat cell with the sealed syringe and into an aliquot of the enzyme which was held in a test tube (10 cm × 9 mm). The solution was then reintroduced into the flat cell. If air bubbles were avoided, the spectrometer immediately reassumed the tuned conditions. In this way the rate of change of the signal height could be monitored continuously from about 4 s after the actual initiation of the reaction. At the low spin concentrations encountered in this work, interactions between paramagnetic centers would be expected to be negligible in solution.

The kinetic course of semiquinone formation may be assessed from plots of signal height against time (e.g. Reference 19), since relative signal widths should remain constant throughout a kinetic experiment. Although catechol-derived radicals exhibit hyperfine splitting patterns characteristic of interactions of the unpaired electron with protons of the aromatic ring, overmodulation may be used to broaden the signal and hence simplify the setting of the magnetic field at the exact position of maximal signal intensity.

Photochemical Oxidation of Tiron-Fig. 1 shows the time course of the appearance of the ESR signal of the Tiron semiquinone at two modulation amplitudes. The recordings were made under similar conditions at pH 6.8. Overmodulation of the semiquinone signal produced Spectrum A, whereas a 10-fold lower modulation amplitude gave Spectrum B, in which the hyperfine splitting is well resolved. Tiron semiquinone yields four identical resonances because the free electron interacts with the 2 non-equivalent ring protons (1). Each proton splits the resonance into a pair of signals because of the 2 protons' differing spatial positions.
Although the hyperfine interactions are not resolved in the overmodulated signal, the kinetics of the formation and decay of the Tiron semiquinone is not significantly altered.
The vertical arrows of Fig. 1 indicate initiation and reversal of the reaction by alternately starting and stopping irradiation of the FMN-EDTA-Tiron mixture in the cavity of the spectrometer.
The traces appeared to be simple exponentials when log (S_max/s) was plotted against time, where S_max represents the maximum first-derivative amplitude and s is the difference between S_max and the amplitude at a given time.
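As an illustration of that linearization, the sketch below recovers a pseudo-first order rate constant from a digitized signal trace by fitting the slope of ln(S_max/s) against time; the synthetic trace, the cutoff, and the variable names are assumptions of ours, not the authors' analysis.

```python
# Estimate a pseudo-first order rate constant k from a rising signal trace S(t),
# using the linearization ln(S_max / (S_max - S)) = k t described in the text.
import numpy as np

def pseudo_first_order_k(t, signal):
    s_max = signal.max()
    s = s_max - signal                      # "s" in the text
    mask = s > 0.05 * s_max                 # avoid log of ~zero near the plateau
    y = np.log(s_max / s[mask])
    k, _intercept = np.polyfit(t[mask], y, 1)
    return k                                # units: s^-1

# Synthetic exponential rise with k = 0.05 s^-1
t = np.linspace(0, 200, 400)
signal = 1.0 * (1 - np.exp(-0.05 * t))
print(round(pseudo_first_order_k(t, signal), 3))   # approximately 0.05
```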
Pseudo-first order rate constants for rise and fall of the signal are given in Table I. The modulation amplitude did not noticeably affect k1, as can be seen from an inspection of Curves 1 and 3. Curve 2 of Fig. 1 also establishes the absolute requirement for molecular oxygen for the oxidation of Tiron to the o-semiquinone. The initial oxygen concentration was less than 2 μM.
On illumination (first vertical arrow, Curve 2) residual oxygen was consumed and the small amount of o-semiquinone which was formed disappeared after a few minutes. Cessation of illumination produced no significant change. When the cell contents were mixed with air followed by re-illumination (third vertical arrow, Curve 2), the reaction proceeded normally as expected.
The kinetic course of the formation and the decay of the Tiron radical indicated that the o-semiquinone is the primary product of the oxidation of the catechol.
If the o-quinone were required to be formed first, a lag in the appearance of the semiquinone after illumination might be expected. This was never observed, indicating that a side reaction between the catechol and the quinone was probably not responsible for initial o-semiquinone formation.
Moreover, cessation of illumination caused a first order return of the radical concentration to a very low level. If accumulated o-quinone reacted with Tiron to maintain a relatively high concentration of semiquinone, the o-semiquinone signal would be expected to remain for a finite period of time after the exciting lamp had been turned off. Since no lag was observed in the decay of the Tiron radical, it must be assumed that this was a pseudo-first order process which involved a reaction between the o-semiquinone and other components of the reaction medium which were in excess.
Although the data of Fig. 1 showed that oxygen is required for the oxidation of Tiron, it remained to be determined whether the FMN semiquinone was also a direct oxidant for the catechol. It was necessary to eliminate this possibility in order to rigorously implicate some reduced form of oxygen in the oxidation of Tiron. Spectrum 2 of Fig. 2 was obtained when a mixture containing FMN, Tiron, and EDTA was irradiated under anaerobic conditions. The FMN concentration was increased in this experiment in order to make possible the direct observation of the FMN radical which is formed in the oxygen-free system under these conditions.
The identity of Spectrum 2 with the FMN radical is established by the broad line width (20 G), by the requirement for large modulation amplitudes for observation of the signal, and by the lack of hyperfine splitting at any modulation amplitude. At a much lower modulation amplitude (Spectrum 3) there is virtually no signal.
The FMN radical did not decay if the illuminating lamp was turned off, showing that the system was in equilibrium after 30 min. These results show that there is no Tiron radical in the illuminated anaerobic reaction mixture even though the FMN radical is present.
On equilibration of the mixture with oxygen the FMN radical completely disappeared.
On re-illumination a totally different signal appeared which was recorded as Spectrum 4 at a 10-fold lower gain setting. This spectrum exhibits the expected hyperfine pattern of the Tiron radical.
Even at the higher modulation amplitude, the catechol radical is distinguishable from the FMN semiquinone because the signal from the former is narrower (7.3 G) and the peak-to-peak amplitude does not increase with increasing modulation amplitude, indicating saturation of the signal. From Fig. 2 it may be concluded that neither the FMN radical nor FMNH2 is an oxidant for Tiron, that no appreciable concentration of FMN radical exists in the aerobic mixture, and that oxidation of Tiron requires mediation by some species derived from molecular oxygen. to an opaque tube, below the spectrometer cavity, and 02 gas was introduced through a capillary. Bubbling of 02 was continued for 60 s before the mixture was returned to the darkened spectrometer cell.
No signals were observed until the illuminating lamp was activated. Spectrum 4 was recorded 5 min after illumination began. Spectrum 5 was obtained 10 min later at a much higher modulation amplitude.
Spectrometer parameters are listed below for the five spectra, which were all recorded with 50 milliwatts of microwave power at 27". The g values of the Tiron and FMN radicals are represented by gT and gF, respectively. 3. Reagents which have no effect on the course of the reactions. Ferrocytochrome c falls into the first category. In the concentration range 1 to 5 x lo+ M, reduced cytochrome c completely suppressed the results described in Fig. 1 by reacting with the Tiron radical (see below). Ferricytochrome c also eliminated o-semiquinone accumulation, probably by competing for a form of univalently reduced oxygen. The resulting conversion to ferrocytochrome c compounded this observed effect. When L-epinephrine or norepinephrine (10T5 M) was added to the Tiron-FMK reaction mixture, Tiron semiquinone formation was not observed.
This result indicates a direct reaction between these hormones and either the immediate oxidant of Tiron or the Tiron radical itself.
In reaction mixtures which contained the catecholamines, no free radicals could be detected at pH 6.8 because of conversion of the hormones to nonradical, oxidized forms as evidenced by the formation of light-absorbing pigments. In the partially inhibitory category were organic compounds that are known scavengers of activated oxygen intermediates, such as indole-acetate (lop4 M), which caused 50% inhibition of the photochemical reaction as followed by ESR. Surprisingly, superoxide dismutase was relatively ineffective in inhibiting the oxidation of Tiron in the photochemical system. Very high concentrations of the dismutase could not be used because of the oxidation of the catechol by the copper-protein (6). At or below a concentration of 3 X 10m5 M enzyme copper, inhibition of the initial rate of o-semiquinone production was never greater than 8% at pH 6.8. This lack of dismutase effect in spite of the absolute requirement for oxygen is considered further below.
The initial rate of oxidation of Tiron was not affected by catalytic concentrations of catalase or by 5 × 10⁻⁵ M horseradish peroxidase.
Hydrogen peroxide at a concentration of 5 × 10⁻² M had no effect on the complete photochemical reaction or on Tiron alone at pH 6.8.
Oxidation of Ferrocytochrome c-The rapid oxidation of ferrocytochrome c was proportional to the concentration of the catechol semiquinone which was present in a reaction mixture. In this experiment, a phosphate buffer solution containing 0.018 M Tiron at pH 6.8 was allowed to react with 10 mg of Ag2O for 5 min at 25°. The solid oxidant then was removed by rapid filtration, which effectively stopped the formation of semiquinone. Aliquots of the mixture were added to 3.8 × 10⁻⁵ M ferrocytochrome c in a spectrophotometer which was set to record absorbance at 550 nm. On addition of the oxidized Tiron an immediate oxidation of the cytochrome occurred in the presence of the Tiron radical.
This initial process was followed by a much slower oxidation reaction, which probably was due to a reaction between the yellow Tiron o-quinone and ferrocytochrome c at pH 6.8. The data correlating the magnitude of the immediate phase in the oxidation of ferrocytochrome c with the height of the ESR signal at specific times are presented in Table II. It is apparent that the rapid reaction phase requires appreciable concentrations of the Tiron radical. When the semiquinone concentration has dropped to a minimum level, virtually no fast reoxidation of ferrocytochrome c is observed. The Tiron semiquinone must be regarded as the oxidant of ferrocytochrome c in both photochemical (7) and enzymatic (8) systems.
Either Tiron or pyrocatechol may mediate the oxygen-dependent reoxidation of ferrocytochrome c (8). This reaction and the effect of superoxide dismutase on it are shown in Fig. 3. Although the figure indicates that the reaction requires FMN and Tiron, superoxide dismutase inhibits only about 50%. Previous data established the absolute requirement for oxygen (7).
Photochemical Oxidation of L-Epinephrine-Although catecholamines produced no observable free radicals in the FMN-EDTA photochemical system, the oxygen-dependent formation of adrenochrome was followed spectrophotometrically at 480 nm (9). Irradiated reaction cuvettes were removed from the light beam periodically and the absorbance was determined. Table III shows that the initial rate of increase in absorbance at 480 nm increased rapidly with pH between pH 6.5 and pH 8.3. However, superoxide dismutase inhibited 80% of the reaction at the higher pH but only 50% at pH 6.5. Again the over-all oxidation requires molecular oxygen (14).

Dihydroorotate dehydrogenase and xanthine oxidase were used to replace photoreduced FMN. Substrate-reduced dihydroorotate dehydrogenase and xanthine oxidase are known to form superoxide anion (8, 20), and this univalently reduced species is considered to be the reductant for ferricytochrome c (21). Indirect evidence also indicated that dihydroorotate ... The lower portion of Fig. 4 shows the nonlinear relationship between the amount of enzyme and the initial rate of increase in the o-semiquinone signal. During the steady state period (0.5 to 2 min) the rate of formation of the radical due to the reaction of univalently reduced oxygen and Tiron balanced the rate of dismutation of the o-semiquinone or its further oxidation. When the source of reducing equivalents (dihydroorotate) neared exhaustion, the latter reactions predominated and the strength of the o-semiquinone signal decreased very slowly. A xanthine-xanthine oxidase system produced similar results, as shown by the solid square data points near Curves 3 and 4 of Fig. 4.
The enzymatic oxidation of Tiron was completely eliminated by inclusion of 50 μg of bovine superoxide dismutase in the reaction mixture. Therefore, in the enzymatic reaction all of the electron flux takes place through freely diffusing superoxide anion even at pH 7.
Although the formation of the Tiron o-semiquinone would be expected to be complex, plots of time against the log of the ratio of maximum signal amplitude to the difference between this maximum signal and the signal amplitude at a given time yielded straight lines, indicating pseudo-first order behavior. The rate constants are given in Table I and were related to enzyme concentration.
(Fig. 4 legend, in part: the spectrometer was tuned with the parameters detailed below; the dihydroorotate dehydrogenase (DHOD) (1.2 nmoles of flavin per ml, Curves 1 and 2, or 0.24 nmole of flavin per ml, Curves 3 and 4) was then added; for similar experiments with xanthine oxidase, 0.67 mM xanthine replaced dihydroorotate in the substrate mixture and 0.2 mg of xanthine oxidase replaced dihydroorotate dehydrogenase; solid squares, xanthine oxidase data; spectrometer conditions were modulation amplitude, 16 G; microwave frequency, 9.449 GHz; microwave power, 100 milliwatts; receiver gain, 6.2 × 10⁶; time constant, 1 s; and temperature, 25°.)

This reaction can be completely inhibited by superoxide dismutase. For example, under the conditions of Fig. 4, L-epinephrine was oxidized to adrenochrome at an initial rate of 0.11 absorbance unit per min at 480 nm in the presence of dihydroorotate dehydrogenase when the catecholamine replaced Tiron in the reaction mixture.
Although no free radical could be observed at pH 7, the visible spectrum showed that the reaction was completely inhibited by 100 μg of superoxide dismutase. In contrast to the reaction with photochemically oxidized catechols, the enzymatic reaction displayed far greater sensitivity to inhibition by superoxide dismutase.
DISCUSSION
The utility of considering the reactions of a disulfonated catechol as models for the oxidation of biologically occurring catechols and catecholamines becomes apparent when the close similarity in chemical properties among Tiron, pyrocatechol, 1,2-dihydroxybenzoic acid, and caffeic acid is considered. All of the compounds autoxidize rapidly above pH 8.5 and give stabilized free radicals at 77° K with N. crassa superoxide dismutase at lower pH values (6). Tiron is aerobically oxidized by reduced FMN and by reduced iron-sulfur-containing flavoproteins, in analogy with the oxidation of epinephrine and norepinephrine by these systems. In all cases the reaction shows an absolute requirement for molecular oxygen. Massey et al. have reported evidence for a series of reactions to account for the complex reoxidation of photoreduced flavins (13). An essential feature of this mechanism is the formation of a covalent compound between reduced flavin and oxygen which can dissociate to yield free superoxide anion and flavin semiquinone, according to Reactions 1 and 2 (13).
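Reactions 1 and 2 are described only in words here, so the following is a hedged rendering of those two steps based on that description; the notation (FlH2 for reduced flavin, FlH• for the flavin semiquinone, O2•− for superoxide anion) and the balancing proton are choices of ours, not the original typography.

```latex
% Sketch of Reactions 1 and 2 as described in the text; notation and the
% balancing H+ are assumptions, not the original scheme.
\begin{align*}
\mathrm{FlH_2} + \mathrm{O_2} &\;\rightleftharpoons\; \mathrm{FlH_2{\cdot}O_2} && \text{(Reaction 1)}\\
\mathrm{FlH_2{\cdot}O_2} &\;\rightleftharpoons\; \mathrm{FlH^{\bullet}} + \mathrm{O_2^{\bullet-}} + \mathrm{H^+} && \text{(Reaction 2)}
\end{align*}
```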
(The original reaction scheme, Reactions 1-7, is not reproduced legibly here.) While Reaction 5 would be completely inhibited by superoxide dismutase, Reaction 6 should not be, since the reduced flavin-oxygen compound would not be expected to fit the active site of the dismutase.
The catechol molecule, however, could readily attack the compound in a bimolecular reaction. The fact that superoxide dismutase is ineffective in blocking the oxygen-dependent transfer of electrons from reduced flavin to Tiron supports Reaction 6 as the major mechanism in the photochemical oxidation of Tiron in the physiological pH range. The only other explanation for this phenomenon would be an inhibition of superoxide dismutase by Tiron.
No evidence for this was found, although it is a difficult point to test because a high concentration of catechols (> 1 mM) would competitively remove the substrate of the enzyme and also would reduce enzyme-bound copper (6).
Da Silva Araujo et al. (23) have pointed out that superoxide anion is usually reversibly generated within a pre-existing charge transfer complex between molecular oxygen and an electron donor.
o-Diphenols may play a dual role by reducing superoxide anion and transferring a proton to the resulting peroxide ion according to Reactions 5 and 6. Hence catechols could shift an unfavorable equilibrium (Reaction 2) between the electron transfer to oxygen and the back reaction.
Hydrogen ion-dependent shifts in this type of equilibrium could account for the effect of pH on the sensitivity of catechol oxidation to inhibition by superoxide dismutase.
At pH 6.8 Tiron is oxidized almost exclusively by the reduced FMN-oxygen compound, and only a very small percentage of free superoxide anion is released into the reaction medium. L-Epinephrine and norepinephrine, in contrast, must react less rapidly with the oxygen compound and thus allow about one-half of the reducing equivalents to be released as freely diffusing superoxide anion. This species also oxidizes these electron donors, but in this case superoxide dismutase may intervene in the electron transfer.
Since the catecholamines completely eliminate the accumulation of the Tiron radical it may be assumed that they also react directly with the Tiron radical.
The reactions of oxidized and reduced cytochrome c (cyt) which are of importance in the present work are as follows.
The versatile reactions of cytochrome c with both oxygen and catechol radicals suggest that this mobile electron carrier may have a protective function in living cells which to a degree supplements the protective action of superoxide dismutase (24). L-Epinephrine and to a lesser extent ferricytochrome c must be able to react directly with the reduced flavin-oxygen compound, because at pH 6.8 aerobic oxidation is only partially inhibited by excess superoxide dismutase. The transfer of electrons from univalently reduced oxygen to cytochrome c or from catechols to univalently reduced oxygen is distributed between two pathways, only one of which involves free superoxide anion.
However, the failure of superoxide dismutase to inhibit a reaction does not rule out the involvement of activated oxygen complexes which are not accessible to the catalytic centers of the enzyme. The enzyme gives no information on univalent electron flux through compounds containing the reductant and oxygen as shown in Reaction 6.
In the case of metalloflavoprotein-catalyzed generation of superoxide anion it is apparent that the anion must be released into the solvent before catechols can be oxidized or cytochrome c reduced.
If a complex or compound between univalently reduced oxygen and a prosthetic group of the enzyme is formed it is inaccessible to these reagents.
The fact that metalloflavoprotein dehydrogenases co-oxidize catechols through free superoxide anion does not necessarily indicate that superoxide is generated specifically at the iron sites of these enzymes.
In the case of flavins and metal-free flavoproteins the reactive site may be structured so that univalent electron acceptors or donors may react either with the reduced flavin-oxygen complex or with free superoxide anion. This difference might explain the reported differences in behavior of univalent electron flow between the two types of flavoproteins (14).
|
v3-fos-license
|
2020-04-02T09:17:56.973Z
|
2020-04-01T00:00:00.000
|
214785647
|
{
"extfieldsofstudy": [
"Chemistry",
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://doi.org/10.3390/microorganisms8040502",
"pdf_hash": "91872688d9de1241e94aa2ea9a11fc175f6df2b3",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46425",
"s2fieldsofstudy": [
"Environmental Science",
"Biology"
],
"sha1": "5b4dac756c4c1803390d06632c882ac04591d67d",
"year": 2020
}
|
pes2o/s2orc
|
Biochar and Rhizobacteria Amendments Improve Several Soil Properties and Bacterial Diversity.
In the current context, there is a growing interest in reducing the use of chemical fertilizers and pesticides to promote ecological agriculture. The use of biochar and plant growth-promoting rhizobacteria (PGPR) is an environmentally friendly alternative that can improve soil conditions and increase ecosystem productivity. However, the effects of biochar and PGPR amendments on forest plantations are not well known. The aim of this study is to investigate the effects of biochar and PGPR applications on soil nutrients and bacterial community. To achieve this goal, we applied amendments of (i) biochar at 20 t hm−2, (ii) PGPR at 5 × 1010 CFU mL−1, and (iii) biochar at 20 t hm−2 + PGPR at 5 × 1010 CFU mL−1 in a eucalyptus seedling plantation in Guangxi, China. Three months after applying the amendments, we collected six soil samples from each treatment and from control plots. From each soil sample, we analyzed several physicochemical properties (pH, electrical conductivity, total N, inorganic N, NO3−-N, NH4+-N, total P, total K, and soil water content), and we determined the bacterial community composition by sequencing the ribosomal 16S rRNA. Results indicated that co-application of biochar and PGPR amendments significantly decreased concentrations of soil total P and NH4+-N, whereas they increased NO3-N, total K, and soil water content. Biochar and PGPR treatments increased the richness and diversity of soil bacteria and the relative abundance of specific bacterial taxa such as Actinobacteria, Gemmatimonadetes, and Cyanobacteria. In general, the microbial composition was similar in the two treatments with PGPR. We also found that soil physicochemical properties had no significant influence on the soil composition of bacterial phyla, but soil NH4+-N was significantly related to the soil community composition of dominant bacterial genus. Thus, our findings suggest that biochar and PGPR amendments could be useful to maintain soil sustainability in eucalyptus plantations.
Introduction
Nowadays, there is a global challenge to find alternatives to reduce the massive use of chemical fertilizers and agrochemical products. In this sense, biochar and plant growth-promoting rhizobacteria (PGPR) are two eco-friendly alternatives that may be used to replace or reduce the use of these chemical products. Biochar has been reported as the product of high-temperature pyrolysis of organic matter in the absence or limited presence of oxygen. As a soil amendment, biochar has been shown to enhance soil quality, the efficiency of nutrient uptake by plants, and crop yield [1]. Biochar application leads to diversity and composition, and (iii) the relationships between soil physicochemical properties and soil bacterial community composition. We hypothesized that the co-application of biochar and PGPR would increase soil nutrient concentration more than that of individual application of biochar/PGPR, potentially increasing soil microbial diversity.
Experimental Site
The experimental field is located in Nanning, Guangxi, China (107°45'-108°51' E, 22°13'-23°32' N). The average annual temperature at the research site is 21.6 °C from the year 2005 to 2015. The average annual rainfall is approximately 1300 mm, with an average humidity of 79%. At this site, the soil is classified as acidic metabolic red soil, with a pH in the range of 4.5-5.5 and a soil organic matter content of 2%-3% [18]. We selected eucalyptus seedling plantations for this study as they are the main crops in our study site, Guangxi being the most important producer of eucalyptus for wood in south China since the 1970s [27].
Biochar and PGPR Characterization
We used biochar made from wheat (Triticum L.) straw, produced in a continuous carbonizer at 600 °C for 3 h. The properties of the biochar applied are shown in Table 1. Table 1. Basic properties of biochar in our research. (Fixed C: fixed carbon; Av. P: Olsen available phosphorus; Av. K: available potassium; Bulk: bulk density; SA: surface area; EC: electrical conductivity; CEC: cation exchange capacity). We used Bacillus megaterium de Bary, an N2-fixing bacillus and a plant-probiotic species. B. megaterium strain DU07 was isolated from the eucalyptus rhizosphere on solid lysogeny broth (LB) in Liangfengjiang National Forest Park, Guangxi, China in June 2011 and then stored at −80 °C in an ultra-low temperature freezer for use. The strain was genotyped by sequencing part of the ribosomal 16S rRNA gene with the universal primers Y1 (5'-TGG CTC AGA ACG AAC GCT GGC GGC-3') and Y2 (5'-CCC ACT GCT GCC TCC CGT AGG AGT-3') by Shanghai Majorbio Bio-pharm Technology (Shanghai, China). The record number of DU07 in GenBank at the NCBI (National Center for Biotechnology Information) was MK391000 [18].
Stored DU07 cells were cultured in liquid LB at 28 °C under shaking at 120 r min⁻¹ for 6 days for activation and were diluted to 5 × 10¹⁰ CFU mL⁻¹ with sterile water before application.
Experimental Design and Soil Sampling
In January 2018, we established 12 plots of 10 m × 10 m, systematically separated using 2 m buffer strips, and allocated 3 for each of the following treatments: (i) biochar, (ii) PGPR, (iii) biochar+PGPR, and (iv) control.
(i) The biochar treatment consisted of digging holes of 30 cm × 30 cm × 30 cm and planting Eucalyptus seedlings (36 plants per 10 m × 10 m plot), filling the hole with a mixture of the extracted soil plus 0.18 kg biochar (corresponding to 20 t hm −2 ). (ii) The PGPR treatment consisted of digging holes of 30 cm × 30 cm × 30 cm and planting Eucalyptus seedlings (36 plants per 10 m × 10 m plot), with the extracted soil inoculated with 2 mL of the logarithmic-phase liquid culture of B. megaterium strain DU07. (iii) The biochar+PGPR treatment consisted of digging holes of 30 cm × 30 cm × 30 cm and planting Eucalyptus seedlings (36 plants per 10 m × 10 m plot), filling the hole with a mixture of the extracted soil inoculated with 2 mL of the logarithmic-phase liquid culture of B. megaterium strain DU07 plus 0.18 kg biochar (corresponding to 20 t hm −2 ). (iv) In the controls, holes were refilled with soil.
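As a quick check of the per-hole biochar dose quoted in treatments (i) and (iii), the arithmetic below assumes the 20 t hm⁻² rate is applied over each planting hole's 30 cm × 30 cm footprint; that scaling assumption is ours, not stated explicitly in the text.

```python
# Per-hole biochar dose implied by 20 t hm^-2 over a 30 cm x 30 cm hole footprint.
rate_kg_per_m2 = 20_000 / 10_000          # 20 t per hectare -> 2 kg per m^2
hole_area_m2 = 0.30 * 0.30                # 30 cm x 30 cm planting hole
print(round(rate_kg_per_m2 * hole_area_m2, 2))   # 0.18 kg per hole, as stated
```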
The Eucalyptus seedlings used for plantation were Eucalyptus DH32-29, a clone of Eucalyptus urophylla S.T. Blake × E. grandis Hill ex Maiden. Seedlings were bare-root, with a mean height of 25 cm, obtained from the Dongmen tree farm in 2018 (Guangxi, China). Roots were trimmed before planting.
Three months after planting the seedlings, we collected a soil sample from each 10 m × 10 m plot (3 samples per treatment) from the top 0-30 cm of the soil. Fresh soil samples were used to determine bacterial community diversity, and air-dried soil samples were used to analyze soil nutrient contents.
Analysis of Soil Physicochemical Properties and Bacterial Community
From each soil sample, we determined gravimetric soil water content (SWC), soil pH (water: soil = 2.5:1) with a pH-4 (Yidian, PHSJ-3F, China), and soil electrical conductivity (EC) with an EC-3 meter (Leici, DDSJ-308F, China). Soil inorganic N (NH 4 + -N and NO 3 − -N) and total N (TN) were determined in a flow injection auto-analyzer (Technicon, AA3, Germany) following digestion with H 2 SO 4 /HClO 4 and NaHCO 3 extraction. Soil total P (TP) was determined by the microplate method, and soil total K was determined via combustion in a flame photometer (Shuangxu, FP6430, China). From each soil sample, we extracted microbial DNA using a E.Z.N.A. ® soil DNA Kit (Omega Bio-Tek, Norcross, GA, USA), according to the manufacturer's protocols. The final DNA concentration and purification were determined using a NanoDrop 2000 UV-vis spectrophotometer (Thermo Scientific, Wilmington, DE, USA), and DNA quality was checked using 1% agarose gel electrophoresis. The V3-V4 hypervariable regions of the bacterial 16S rRNA gene were amplified with primers, as shown in Table 2, using a thermocycler PCR system (GeneAmp 9700, Applied Biosystems, Foster City, CA, USA). The PCR reactions program is shown in Table 3. The resulting PCR products were extracted from a 2% agarose gel and further purified using an AxyPrep DNA Gel Extraction Kit (Axygen Biosciences, Union City, CA, USA) and quantified using QuantiFluor™-ST (Promega Madison, WI, USA), according to the manufacturer's protocol. Raw FASTQ files were demultiplexed, quality-filtered using Trimmomatic, and merged using FLASH, with the following criteria: (i) The reads were truncated at any site receiving an average quality score <20 over a 50 bp sliding window. (ii) Primers were exactly matched allowing 2 nucleotide mismatching, and reads containing ambiguous bases were removed. (iii) Sequences whose overlap was longer than 10 bp were merged according to their overlap sequence. Operational taxonomic units (OTUs) were clustered with 97% similarity cutoff using UPARSE (version 7.1 http://drive5.com/uparse/), and chimeric sequences were identified and removed using UCHIME. The taxonomy of each 16S rRNA gene sequence was analyzed using the RDP Classifier algorithm (http://rdp.cme.msu.edu/) against the Silva (SSU123) 16S rRNA database using a confidence threshold of 70%.
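As an illustration of quality-filtering criterion (i), the sketch below truncates a read at the first 50 bp window whose mean Phred quality falls below 20. It mirrors only the stated rule; it is not the Trimmomatic implementation, and the example read and quality scores are invented.

```python
# Sliding-window quality truncation: cut the read where the mean quality of a
# 50 bp window drops below 20, as described in criterion (i).
def truncate_read(seq, quals, window=50, min_mean_q=20):
    for start in range(0, max(1, len(seq) - window + 1)):
        window_quals = quals[start:start + window]
        if sum(window_quals) / len(window_quals) < min_mean_q:
            return seq[:start], quals[:start]      # truncate at the failing window
    return seq, quals

read = "ACGT" * 30
quals = [35] * 80 + [10] * 40                      # quality collapses near the 3' end
trimmed_seq, trimmed_quals = truncate_read(read, quals)
print(len(read), len(trimmed_seq))                 # original vs truncated length
```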
Data Analysis
We calculated bacterial α-diversity based on OTUs. We used the Chao1 (Equation S1) and ACE (Equations S2-S5) indices to characterize the richness of the bacterial community and the Simpson index (Equation S6) to characterize its diversity [28][29][30].
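As a concrete illustration of the richness and diversity indices referenced above (Equations S1-S6 are in the supplementary material and are not reproduced here), the following sketch computes Chao1 and the Simpson concentration index from an OTU count vector. This is a generic textbook implementation under my own assumptions, not the authors' code; note that software packages differ in whether they report Simpson's D, 1 − D, or 1/D.

```python
from collections import Counter

def chao1(otu_counts):
    """Bias-corrected Chao1 richness: S_obs + F1*(F1-1) / (2*(F2+1)),
    where F1 and F2 are the numbers of singleton and doubleton OTUs."""
    counts = [c for c in otu_counts if c > 0]
    s_obs = len(counts)
    f1 = sum(1 for c in counts if c == 1)
    f2 = sum(1 for c in counts if c == 2)
    return s_obs + f1 * (f1 - 1) / (2 * (f2 + 1))

def simpson(otu_counts):
    """Simpson concentration index D = sum(p_i^2); 1 - D or 1/D are the
    usual diversity forms reported by different packages."""
    n = sum(otu_counts)
    return sum((c / n) ** 2 for c in otu_counts if c > 0)

# toy OTU table for one sample: OTU id -> read count (invented numbers)
sample = Counter({"otu1": 120, "otu2": 30, "otu3": 2, "otu4": 1, "otu5": 1})
print(chao1(sample.values()))   # richness estimate >= the 5 observed OTUs
print(simpson(sample.values())) # closer to 1 => community dominated by few OTUs
```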
The effects of PGPR and biochar amendments on soil nutrient contents and bacterial community diversity were evaluated using ANOVA in R (http://www.R-project.org/). Where main effects were significant, we used pairwise Tukey's tests to determine significant differences among treatments. We also conducted a logistic regression analysis in R, with statistical significance set at α = 0.05. Redundancy analysis (RDA) and Monte-Carlo permutation tests were conducted using Canoco 5.0.
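The authors ran the ANOVA-plus-Tukey workflow in R; as a hedged illustration of the same comparison logic, a minimal Python equivalent with simulated data and hypothetical column names could look like this (it is not the study's analysis script):

```python
import numpy as np
import pandas as pd
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)
# simulated example: total N (g/kg) for 3 plots in each of 4 treatments
df = pd.DataFrame({
    "treatment": np.repeat(["control", "biochar", "PGPR", "biochar+PGPR"], 3),
    "TN": np.concatenate([
        rng.normal(1.00, 0.05, 3),  # control
        rng.normal(1.20, 0.05, 3),  # biochar
        rng.normal(1.25, 0.05, 3),  # PGPR
        rng.normal(1.15, 0.05, 3),  # biochar+PGPR
    ]),
})

# one-way ANOVA across treatments
groups = [g["TN"].values for _, g in df.groupby("treatment")]
print(f_oneway(*groups))

# pairwise Tukey HSD, analogous to the pairwise comparisons in the paper
print(pairwise_tukeyhsd(endog=df["TN"], groups=df["treatment"], alpha=0.05))
```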
Effects of Biochar and PGPR on Soil Physicochemical Properties
Both the biochar and biochar+PGPR treatments increased concentrations of soil NO3−-N and inorganic N, as well as electrical conductivity (EC) and soil water content (SWC), compared with the control (Table 4). Biochar and PGPR significantly increased total N (TN), and soil total K (TK) was increased by PGPR and biochar+PGPR (Table 4). In contrast, soil total P (TP) and NH4+-N significantly decreased after biochar and/or PGPR treatments (Table 4). Additionally, we found significant differences in NO3−-N, inorganic N, TN, TP, and TK when comparing the co-application and separate application of biochar and PGPR (Table 4). Soil pH was not affected by the biochar and PGPR treatments (Table 4).
Effects of Biochar and PGPR on Microbial Richness and Diversity Indices
A total of 138,676 optimized sequences were obtained from sequencing (Table 5). The coverage index of soils amended with biochar and PGPR was between 98% and 99%, indicating that the sequencing depth captured nearly all of the bacterial sequences present in the samples and that the sequence data volumes were adequate.
The effects of biochar and PGPR on the α-richness and α-diversity of bacteria based on OTUs are shown in Table 5. On the one hand, bacterial richness was positively affected by the co-application or separate application of biochar and PGPR, since all biochar and PGPR treatments significantly increased the ACE and Chao1 indices. On the other hand, the biochar and biochar+PGPR treatments significantly (p < 0.05) increased the Simpson index relative to the control. We also observed significant differences in the Simpson index between the co-application and separate application of biochar and PGPR: PGPR significantly increased the bacterial diversity index relative to the co-application of PGPR and biochar, whereas the separate application of biochar showed the opposite trend.

Table 4. Means and standard errors of soil nutrient contents amended with plant growth-promoting rhizobacteria (PGPR) and biochar. Different letters indicate significant differences at p < 0.05 among treatments and the control. NO3−-N: nitrate nitrogen; NH4+-N: ammonium nitrogen; IN: inorganic nitrogen; TN: total nitrogen; TP: total phosphorus; TK: total potassium; EC: electrical conductivity; SWC: soil water content.
Effects of Biochar and PGPR on Soil Bacterial Community Composition (Phylum Level)
The analysis based on the 16S rRNA gene showed that the main bacterial phyla in the soil samples were Proteobacteria (25.60%), Chloroflexi (19.10%), Actinobacteria (17.57%), Acidobacteria (9.65%), Bacteroidetes (6.89%), Planctomycetes (5.36%), Gemmatimonadetes (3.81%), Firmicutes (2.55%), and Armatimonadetes (1.34%), with a relatively small amount (5.98%) of Verrucomicrobia and Spirochaetae and unclassified bacterial flora (2.15%) (Figure 1). The bacterial composition of biochar- and PGPR-amended soils at the phylum level is shown in Figure 1. The relative abundance of Proteobacteria in the biochar (0.29), PGPR (0.28), and biochar+PGPR (0.25) treatments was significantly (p < 0.01) lower than in the control (0.33). Significant differences between co-application and separate application of biochar and PGPR were also observed. A similar pattern occurred with Acidobacteria and Bacteroidetes, although we did not find significant differences between co-application and separate application of biochar and PGPR. In contrast, the relative abundance of Actinobacteria was significantly (p < 0.001) higher in soils treated with biochar (0.46), PGPR (0.3), and biochar+PGPR (0.34) than in the control (0.28), and there were no significant differences between co-application and separate application of biochar and PGPR. A similar response was found for Gemmatimonadetes and Cyanobacteria. The abundance of Chloroflexi significantly (p < 0.001) increased in the PGPR treatment (0.15) compared to the control (0.12), but significantly decreased after the biochar (0.09) and biochar+PGPR (0.01) treatments, and there were also significant differences between co-application and separate application of biochar and PGPR. The relative abundance of Firmicutes, Nitrospirae, and Verrucomicrobia after all or some of the biochar and PGPR treatments was significantly higher than in the control, and significant differences between co-application and separate application of biochar and PGPR were found.
Effects of Biochar and PGPR on Soil Bacterial Community Composition (Genus Level)
We show the relative abundances and community composition of the dominant bacterial genera in soil via cluster analysis in a heatmap (Figure 2). The clustering result showed that the biochar treatment was classified into a separate cluster (group 1), and the control (group 2) was clearly separated from the PGPR and biochar+PGPR treatments (group 3), indicating that the bacterial community of soils after PGPR amendment was significantly different from that of soils after biochar treatment and from the control. Soil bacterial genera were grouped into four clusters according to the abundance of each taxon (Figure 2). The most abundant genera were grouped in Cluster 1, composed of genera from the Micrococcaceae and Acidobacteria. Genera with intermediate-high abundance were included in Cluster 2, in which the main genera were Nocardioides, genera from the Anaerolineaceae, and Roseiflexus. Genera with intermediate-low abundance were included in Cluster 3, in which the main genera were from the families Gemmatimonadaceae, Rhodospirillaceae, and Intrasporangiaceae, together with the genera Streptomyces and Lysobacter. Genera with low abundance were included in Cluster 4, in which the main genera were Rhodococcus, Bacillus, Williamsia, and Sphingomonas.
Correlations between Soil Physicochemical Properties and Soil Bacterial Community Composition
The relationship between soil physicochemical properties and the relative abundances of dominant bacterial taxa was studied with redundancy analysis (RDA) at the phylum (Figure 3) and genus (Figure 4) levels. In general, the forward selection of the RDA showed that all physicochemical properties except NH4+-N affected soil bacterial community composition at the phylum level, whereas all soil properties except NO3−-N affected soil bacterial community composition at the genus level, indicating differences in inorganic N preference among the different bacterial taxa. The RDA of soil physicochemical properties and the relative abundances of dominant bacterial phyla (Figure 3) showed that the first ordination axis was correlated with Cyanobacteria, Actinobacteria, and Firmicutes and inversely related to Acidobacteria and Bacteroidetes, explaining 63.50% of the total variability. The second ordination axis was strongly related to Gemmatimonadetes and Proteobacteria, explaining 17.51% of the variability. The RDA revealed some trends; for instance, the relative abundance of soil Gemmatimonadetes was associated with TK and NO3−-N concentrations, and that of Cyanobacteria with TN. The relationship between soil physicochemical properties and the relative abundances of the dominant bacterial genera is shown in Figure 4. The RDA revealed that the first ordination axis was strongly correlated with TK10, KD4-96, Nitrosomonas, and Elev-16S and inversely related to Micrococcus, Lysobacter, and Rhodococcus, explaining 79.13% of the total variability. The second ordination axis was mainly associated with Nitrospira, Anaerolineaceae, and Rhodospirillaceae and inversely related to Cytophagaceae. The RDA suggests that the relative abundance of Nitrospira and Anaerolineaceae is associated with TK content and inversely related to TP, TN, and SWC, whereas genera from the family Cytophagaceae showed the opposite pattern. Results of the Monte-Carlo permutation test indicated that soil NH4+-N was significantly related to bacterial community composition at the genus level (pseudo-F = 6. …)
Effect of Biochar and PGPR on Soil Nutrient Content
This study determined the effect of PGPR and biochar on soil physicochemical properties. The increases in soil NO3−-N and inorganic N after the biochar and biochar+PGPR treatments agree with other studies, as biochar can adsorb NO3−-N through the positive charges on biochar surfaces [31]. Biochar amendment can also alter soil water-holding capacity and cation exchange capacity because of its large porosity and specific surface area [5], which is in line with the increased soil EC and SWC in the biochar and biochar+PGPR treatments. PGPR application increases the rate of organic matter degradation, and thus the amount of soluble N compounds in soils with a high C/N ratio [32], thereby improving soil macro-nutrient concentrations such as nitrogen [33]; this is in line with the increased TN concentration in the PGPR treatment. The higher porosity, cation exchange capacity, and sorption capacity of charcoal may result in the accumulation of nutritive cations and anions [34], which is consistent with the increased TN in the biochar treatment. Our results also indicated that major increases in TK occurred in the PGPR treatment, which can be attributed to the increased potassium-solubilization capacity of the soil microbes [35]. In general, soil NH4+-N and TP concentrations significantly decreased in all biochar and PGPR treatments, suggesting that biochar application slowed the degradation of soil organic matter, or that biochar adsorbed NH4+-N and soluble N and P compounds [32], in these N-limited, high C/N soils.
It is also possible that biochar amendments interact with other soil environmental factors that influence NH4+-N and TP availability, such as the diversity of the soil microbial community, the rates of nutrient mineralization, or changes in soil texture that may influence nutrient retention. Furthermore, biological N fixation and P solubilization by PGPR are long-term processes, whereas the large number of rhizobacteria applied may themselves take up soil nutrients, which may explain why PGPR application decreased soil NH4+-N and TP in the short term in our study.
For soil physicochemical properties, the co-application of biochar and PGPR significantly increased soil NO3−-N, inorganic N, and TK relative to the separate application of biochar or PGPR.
When biochar and PGPR were co-applied as a soil amendment, soil fertility increased to a relatively high level, with biochar potentially accelerating the conversion of soil NH4+-N to NO3−-N for soil N retention [20], leading to the significantly higher NO3−-N under co-application. Soil inorganic N also increased with the co-application of biochar and PGPR relative to separate application. This effect agrees with the widespread assumption that PGPR increase nitrogen fixation [33] and that biochar reduces nitrogen leaching [36]. A significant increase in soil TK under co-application compared with separate application of biochar or PGPR also occurred; the main reason is likely that biochar mineralizes slowly [36] yet readily adsorbs the potassium (K+) [37] solubilized by PGPR in the topsoil, which may reduce K loss.
Soil Microbial α-Diversity Indices
Our results showed that biochar and biochar+PGPR significantly increased soil bacterial diversity (Simpson index) and richness (ACE and Chao1 indices). One of the factors that may affect the diversity of the soil bacterial community is soil acidity [38], which was only slightly changed by biochar in our study (soil pH did not differ significantly among treatments). The increased soil microbial richness may be a result of improvements in the soil environment from biochar and PGPR applied separately or together, such as enhancement of soil structure, inorganic and organic nutrient input, or higher water-holding capacity [8,39]. Changes in these environmental factors may accelerate the metabolism and reproduction of microbial communities and thus elevate soil bacterial richness [40]. PGPR significantly increased the bacterial diversity index relative to the co-application of PGPR and biochar, whereas the separate application of biochar showed the opposite trend. Soil organic matter content and TN are indicators of potential soil nutrient status as well as of soil bacterial community diversity [19]. Hence, the relatively higher diversity (Simpson) and richness (ACE and Chao1) in the separate PGPR application treatment may be a result of a sufficient N supply from biological N fixation by the strain DU07 amendment. In contrast, Rondon et al. [41] reported that biochar application may reduce the capacity of bacteria to utilize phenolic acid carbon sources, potentially decreasing the diversity of the soil bacterial community.
Soil Bacterial Community Composition (Phylum Level)
Proteobacteria is one of the most diverse and metabolically active bacterial phyla, and it mainly plays a part in maintaining soil ecological stability via the soil nitrogen supply [42]. Acidobacteria is mainly distributed in terrestrial environments, oceans, and activated sludge, demonstrating general adaptability and functional diversity [23]. In the short term after PGPR application, only limited available nitrogen could be supplied for bacterial growth and reproduction, because biological N fixation by PGPR occurs gradually [18]; this is consistent with the decreased relative abundance of Proteobacteria, Acidobacteria, and Bacteroidetes in the PGPR treatment. Rondon et al. [41] reported that P became the limiting factor for microbial growth in an N-sufficient soil after biochar was applied. In our study, 20 t hm−2 biochar significantly increased soil NO3−-N, inorganic nitrogen, TN, and TK concentrations; however, the decreased soil TP concentration in the separate biochar application treatment may have become the limiting factor for the metabolism and reproduction of soil Proteobacteria, Acidobacteria, and Bacteroidetes [43]. Actinobacteria are gram-positive bacteria that can degrade cellulose and chitin as a main resource for the soil nutrient supply. Huang et al. [44] reported that soil Actinobacteria abundance increased significantly over three years in the Gurbantunggut desert in response to nitrogen fertilization, which is consistent with the positive correlation between the relative abundance of soil Actinobacteria and PGPR application in our research. High-temperature pyrolysis biochar has also been shown to provide an easily decomposed carbon source for soil Actinomycetes [45], which could explain the increase in the relative abundance of soil Actinobacteria in our study. The abundance of Gemmatimonadetes is mainly correlated with soil type and environmental factors, as in most studies [46,47]. Gemmatimonadetes was positively related to soil moisture content, which is consistent with the relationship in our study between SWC (significantly affected by biochar) and the relative abundance of Gemmatimonadetes. Most Cyanobacteria have the potential to biologically fix N via the nifH gene [48]; thus, a positive correlation between soil Cyanobacteria abundance and soil N content has been reported in previous studies [49,50], which is in line with the association between soil TN (significantly influenced by PGPR) and the relative abundance of Cyanobacteria.
Chloroflexi are gram-negative bacteria that can potentially metabolize autotrophically through photosynthesis; thus, the growth and reproduction of Chloroflexi do not rely strongly on the soil nutrient supply in terrestrial environments. Calderón et al. [30] reported that soil Chloroflexi abundance showed a positive correlation with soil pH in a reciprocal transplant experiment, which is consistent with the results of our study. Khodadad et al. [47] reported that applying 20-60 t hm−2 biochar may decrease soil Chloroflexi abundance by regulating soil available N, available P, and available K. The decrease in soil Chloroflexi relative abundance after the biochar and biochar+PGPR treatments may be a consequence of the increased inorganic N content resulting from biochar application: if sufficient available nitrogen was supplied for plant growth, the reproduction of Chloroflexi was potentially inhibited. Soil Firmicutes have been reported to increase following biochar amendment [47], which is in line with the increase in the relative abundance of soil Firmicutes after the application of biochar in our study. Koch et al. [51] reported that Nitrosomonas, the dominant genus of Nitrospirae, can hydrolyze urea into NH4+ and CO2 in soils with a low concentration of ammonium nitrogen and thereby further increase soil inorganic N. This is consistent with our finding that soil inorganic N was positively correlated with the relative abundance of soil Nitrospirae. Verrucomicrobia are among the bacteria that take part in carbon (C) cycling and fixation in acidic soil, and in some previous research [52][53][54] Verrucomicrobia were classified as methane-oxidizing bacteria that potentially convert methane into CO2 or biomass while utilizing NH4+-N in soil. These findings are similar to our result that negative correlations were observed between soil NH4+-N and the relative abundance of Verrucomicrobia following PGPR amendments (both the separate application of PGPR and the co-application of PGPR and biochar).
Soil Bacterial Community Composition (Genus Level)
Soil pH influences bacterial distribution in terrestrial environments by regulating the microbial habitat. Feng et al. [55] reported that soil pH is a key predictor of the structure of soil bacterial communities, which is in line with the positive correlation between the relative abundance of soil Micrococcaceae and soil pH. Nocardioides can biologically degrade polycyclic aromatic hydrocarbons (PAHs) and are widely distributed in plant rhizosphere soil [56]. The increase in the relative abundance of Nocardioides in the biochar treatment was probably because PAHs readily accumulate on biochar surfaces and may stimulate the reproduction of Nocardioides. Anaerolineaceae is a representative family of the Chloroflexi and mainly takes part in the digestion and degradation of organic matter [57]; the responses of Anaerolineaceae were more similar among treatments than those of Chloroflexi as a whole. Sphingomonas are gram-negative bacteria that can decompose organic compounds (especially polychlorophenols) in soil, a function similar to that of Nocardioides in maintaining the soil ecological environment. The increase in the relative abundance of Roseiflexus under biochar amendment was mainly because Roseiflexus is a genus of aerobic, thermophilic gram-negative bacteria [58], and biochar applied to soil increases the absorption of soil heat, thus promoting the fast growth and reproduction of Roseiflexus. Deslippe et al. [59] reported that the abundance of Gemmatimonadaceae was statistically related to soil temperature, which is in line with the significant increases in the relative abundance of Gemmatimonadaceae following biochar amendment in our research. Photosynthetic pigments such as carotenoids and bacteriochlorophyll are widely distributed in Rhodospirillaceae cells, which potentially allows Rhodospirillaceae to photosynthesize without releasing oxygen. Lehmann et al. [60] reported that applying 20-60 t hm−2 biochar could significantly increase the abundance of soil Rhodospirillaceae, which is consistent with our result that the co-application or separate application of biochar increased the relative abundance of Rhodospirillaceae. The relative abundance of Acidimicrobiales was significantly increased by all PGPR treatments (PGPR and biochar+PGPR), the main reason being that the lactic acid produced by the activity of the applied bacteria can be used as a carbon source by Acidimicrobiales. This is also consistent with the finding of increased rhizosphere microbial diversity in soil upon PGPR amendment [56]. Short-term separate application of biochar or PGPR increased the relative abundance of Bacillus in our research, whereas the co-application of biochar and PGPR had the opposite effect. A probable reason is that Bacillus can take up and utilize nutrients from the surface of the applied biochar, and the PGPR applied in our research was itself identified as Bacillus megaterium.
Suggestion for Using Biochar and PGPR to Improve Soil Properties and Bacterial Diversity
The significantly positive responses of soil NO3−-N, inorganic N, EC, and SWC to the biochar and biochar+PGPR applications, of soil TN concentration to the PGPR and biochar applications, and of soil TK concentration to the PGPR and biochar+PGPR applications demonstrate that the co-application or separate application of biochar and PGPR benefits specific soil nutrients in the short term, although decreases in soil TP and NH4+-N in the PGPR treatment were also observed. Further, the effects on soil physicochemical properties were significantly influenced by the manner of application (co-application or separate application) of the biochar and PGPR used in our study. Furthermore, the significantly positive responses of the relative abundance of soil Actinobacteria, Gemmatimonadetes, and Cyanobacteria to all of the treatments indicate that the co-application or separate application of biochar and PGPR was conducive to specific soil communities at the phylum level, although decreases in Proteobacteria, Acidobacteria, and Bacteroidetes were also observed. Acidobacteria has been reported to be a predictor of soil health status and is generally negatively correlated with soil quality, especially in relatively barren soil [61]. The significant decrease in Acidobacteria relative abundance in our study therefore indicates that soil quality was improved by the applications of biochar and PGPR. The cluster analysis showed that, at the genus level, the bacterial communities in the PGPR and biochar+PGPR treatments clearly differed from those of the biochar treatment and the control, indicating that the co-application of biochar and PGPR changed the soil bacterial community relative to the control and to the separate application of biochar. The RDA results showed that soil physicochemical properties had no significant impact on soil bacterial composition at the phylum level, whereas soil NH4+-N significantly influenced soil bacterial composition at the genus level. One limitation of our research is that it focused only on the relationship between the soil bacterial community and soil nutrient contents, whereas soil nutrient status may also depend on the interaction between plants and soil through nutrient transformations.
Conclusions
Our study indicates that biochar and PGPR amendments modify soil physicochemical properties in Eucalyptus plantation soils. The co-application of biochar and PGPR significantly increases NO3−-N, inorganic N, total K, and soil water content, contributing to the plant and microbial nitrogen and potassium supply and improving moisture conditions. This study also revealed that the manner of application (co-application or separate application) of biochar and PGPR significantly influences the bacterial community composition, increasing bacterial OTU richness and diversity and increasing the relative abundance of Actinobacteria, Gemmatimonadetes, and Cyanobacteria.
Results also indicate that soil NH4+-N might serve as a sensitive indicator of soil bacterial community composition at the genus level, providing useful information on soil microbial activity and ecological stability. We encourage the co-application of biochar and PGPR as a bio-fertilizer, as it has the potential to reduce the heavy demand for artificial N fertilizer in Eucalyptus plantations and to enhance soil bacterial diversity.
|
v3-fos-license
|
2024-05-26T15:39:46.671Z
|
2024-05-23T00:00:00.000
|
270040866
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": null,
"oa_url": null,
"pdf_hash": "baa258d04f640ca81d904ee8b0030498e6c4edee",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46428",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "50982fab4f67b6f320be681b7f31df8f890137c5",
"year": 2024
}
|
pes2o/s2orc
|
Effects of different sodium–glucose cotransporter 2 inhibitors in heart failure with reduced or preserved ejection fraction: a network meta-analysis
Background This systematic review and meta-analysis aimed to explore the effects of different sodium–glucose cotransporter-2 inhibitors (SGLT2i) on prognosis and cardiac structural remodeling in patients with heart failure (HF). Methods Relevant studies published up to 20 March 2024 were retrieved from PubMed, EMBASE, Web of Science, and Cochrane Library CNKI, China Biomedical Literature Service, VIP, and WanFang databases. We included randomized controlled trials of different SGLT2i and pooled the prognosis data of patients with HF. We compared the efficacy of different SGLT2i in patients with HF and conducted a sub-analysis based on left ventricular ejection fraction (LVEF). Results We identified 77 randomized controlled trials involving 43,561 patients. The results showed that SGLT2i significantly enhanced outcomes in HF, including a composite of hospitalizations for HF and cardiovascular death, individual hospitalizations for HF, Kansas City Cardiomyopathy Questionnaire (KCCQ) scores, left atrial volume index (LAVi), and LVEF among all HF patients (P < 0.05) compared to a placebo. Sotagliflozin was superior to empagliflozin [RR = 0.88, CI (0.79–0.97)] and dapagliflozin [RR = 0.86, CI (0.77–0.96)] in reducing hospitalizations for HF and CV death. Dapagliflozin significantly reduced hospitalizations [RR = 0.51, CI (0.33–0.80)], CV death [RR = 0.73, CI (0.54–0.97)], and all-cause mortality [RR = 0.69, CI (0.48–0.99)] in patients with HF with reduced ejection fraction (HFrEF). SGLT2i also plays a significant role in improving cardiac remodeling and quality of life (LVMi, LVEDV, KCQQ) (P < 0.05). Among patients with HF with preserved ejection fraction (HFpEF), SGLT2i significantly improved cardiac function in HFpEF patients (P < 0.05). In addition, canagliflozin [RR = 0.09, CI (0.01–0.86)] demonstrated greater safety compared to sotagliflozin in a composite of urinary and reproductive infections of HFpEF patients. Conclusion Our systematic review showed that SGLT2i generally enhances the prognosis of patients with HF. Sotagliflozin demonstrated superiority over empagliflozin and dapagliflozin in a composite of hospitalization for HF and CV death in the overall HF patients. Canagliflozin exhibited greater safety compared to sotagliflozin in a composite of urinary and reproductive infections of HFpEF. Overall, the efficacy of SGLT2i was greater in HFrEF patients than in HFpEF patients.
Introduction
Heart failure (HF) results from either contraction or relaxation dysfunction of the heart, leading to multisystem symptoms and signs. Despite a decrease in the age-standardized prevalence of HF from 1990 to 2019, the reduction is not significant, and HF remains a significant cause of disability and death worldwide (1). Currently, in developed countries (2)(3)(4), such as Britain, France, and the United States, the prevalence of HF ranges from 1.5% to 2.0%, while in developing countries (5) and regions, such as Asia and Africa, it spans from 1.3% to 6.7%. According to the latest definitions by the European and American Heart Associations, HF is categorized into three main types, namely, HF with reduced ejection fraction (HFrEF) (LVEF < 40%), HF with mildly reduced ejection fraction (LVEF 40%-50%), and HF with preserved ejection fraction (HFpEF) (LVEF ≥ 50%). However, many previous meta-analyses defined HFpEF as EF ≥ 45%, which contrasts with the current HF classification. Hence, our study adopts the definition of HFpEF as EF ≥ 50%.
SGLT2i are a new class of antidiabetic medications originally developed for managing diabetes.
Data extraction
Literature screening involved two researchers who independently reviewed articles based on the established inclusion and exclusion criteria. After their individual assessments, they cross-checked their selections to ensure consistency. Key information, such as the first author's name, study design, baseline characteristics, and study endpoints, was systematically extracted from each article.
Literature quality evaluation
The quality of the included studies was independently assessed by two researchers using the "risk of bias assessment criteria" from the Cochrane Reviewers' Handbook (version 5.1.0). The evaluation of the RCTs involved the following components: (1) randomization method, (2) allocation concealment, (3) blinding of participants, personnel, and outcome assessors, (4) completeness of outcome data, (5) absence of selective outcome reporting, and (6) clarity of reasons for losses to follow-up or discontinuation.
Statistical analysis
The statistical analysis was conducted using Stata 15.1 software for the network meta-analysis. Relative risks or odds ratios were determined for dichotomous variables, while continuous variables were analyzed using the frequentist methodology in the network meta-analysis. The fixed-effect model was used when heterogeneity was low (I² < 50% and p > 0.01); otherwise, the random-effects model was applied. Pooled results for continuous variables were expressed as the mean difference (MD). The surface under the cumulative ranking curve (SUCRA) was employed to indicate the preferred ranking of each treatment. Small-sample effects were investigated through a network funnel plot. P < 0.05 was considered statistically significant.
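To make the fixed- versus random-effects decision above concrete, the sketch below pools log relative risks by inverse-variance weighting, computes Cochran's Q and I², and switches to a DerSimonian-Laird random-effects model when heterogeneity exceeds the stated I² threshold. It is a simplified pairwise illustration with invented numbers, not the Stata network meta-analysis used in the study, and it omits the paper's additional p-value criterion.

```python
import math

def pool_log_rr(log_rr, se, i2_threshold=0.50):
    """Inverse-variance pooling of log relative risks with an
    I^2-based switch between fixed- and random-effects models."""
    k = len(log_rr)
    w = [1 / s**2 for s in se]
    fixed = sum(wi * yi for wi, yi in zip(w, log_rr)) / sum(w)
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, log_rr))
    i2 = max(0.0, (q - (k - 1)) / q) if q > 0 else 0.0
    if i2 < i2_threshold:
        est, var, model = fixed, 1 / sum(w), "fixed"
    else:
        # DerSimonian-Laird between-study variance tau^2
        tau2 = max(0.0, (q - (k - 1)) / (sum(w) - sum(wi**2 for wi in w) / sum(w)))
        w_re = [1 / (s**2 + tau2) for s in se]
        est = sum(wi * yi for wi, yi in zip(w_re, log_rr)) / sum(w_re)
        var, model = 1 / sum(w_re), "random"
    lo, hi = est - 1.96 * math.sqrt(var), est + 1.96 * math.sqrt(var)
    return model, i2, math.exp(est), (math.exp(lo), math.exp(hi))

# hypothetical trial-level log RRs and standard errors (not real trial data)
log_rr = [math.log(0.75), math.log(0.86), math.log(0.70)]
se = [0.10, 0.08, 0.12]
print(pool_log_rr(log_rr, se))  # model used, I^2, pooled RR, 95% CI
```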
Basic characteristics and quality assessment
A total of 6,229 relevant literature sources were identified through a comprehensive search across multiple databases. After thorough screening, 77 RCTs were included in this study. The study encompassed a cohort of 43,561 patients diagnosed with HF, consisting of 11,734 patients with HFpEF and 31,827 patients with HFrEF. A detailed description of the literature search and screening process is illustrated in Figure 1, while the baseline characteristics are outlined in Table 1. Among the 77 selected articles, outcome indicators related to both HFpEF and HFrEF were reported in 5 articles, 22 focused on HFpEF, and the remaining studies were centered on HFrEF. Specifically, 5 studies investigated canagliflozin treatment, involving a total of 453 patients; 55 studies examined dapagliflozin treatment, with a collective enrollment of 16,201 patients; 16 studies utilized empagliflozin, including 21,024 patients; 1 study explored the efficacy of ertugliflozin, enrolling 478 patients; 1 study evaluated ipragliflozin treatment, with a cohort of 68 patients; and 4 studies analyzed the effects of sotagliflozin, involving a total of 5,537 patients. Notably, except for empagliflozin and dapagliflozin, no studies directly compared the remaining four types of SGLT2i.
Bias risk evaluation
The results of the bias risk evaluation are presented in Figure 2 and Supplementary Figure S1. For random sequence generation, the 77 studies employing a random number table or a random Excel table were identified as low risk; 33 studies provided detailed descriptions of their allocation concealment procedures, whereas the remaining studies lacked such descriptions. Regarding implementation bias, 29 studies were defined as high risk, 15 studies did not provide sufficient details, and the rest were classified as low risk. For the assessment of outcome data, one study was identified as high risk due to insufficient details, one was poorly described, and the others were considered low risk. All studies reported complete data for outcomes. The assessment of outcome selection bias indicated that 72 studies were classified as low risk, while 5 required further clarification. The assessment of other biases showed that seven studies were defined as high risk, two were poorly described, and the remaining studies were assessed as low risk.
CV death and all-cause death
As shown in Table 2, empagliflozin, dapagliflozin, and sotagliflozin showed no difference in reducing all-cause mortality and cardiovascular mortality compared to placebo. The network plot is shown in Figure 3D, while the ranking based on SUCRA values is presented in Table 3.
NT-proBNP
As presented in Table 2,
KCCQ
As detailed in Table 2, compared to placebo, there was no significant difference in the improvement of KCCQ scores in …
In bold: values of statistical significance (P < 0.05).
HCT
Table 2 illustrates that empagliflozin [MD = −0.03, CI (−0.04 to −0.02)] significantly increased hematocrit (HCT) in patients with HF compared to placebo. Dapagliflozin did not show any significant difference, and there was no disparity observed among the different SGLT2i treatments. The network plot illustrating these findings is presented in Figure 3O. The ranking based on SUCRA values is as follows: placebo (95%), dapagliflozin (53%), and empagliflozin (1%).
The results of the subgroup analysis
The efficacy of SGLT2i in HFrEF patients
The network plot in Figure 4 illustrates that dapagliflozin [RR = 0.44, CI (0.15-1.23)], empagliflozin [RR = 0.75, CI (0.11-5.15)], and sotagliflozin [RR = 0.42, CI (0.05-3.66)] did not significantly reduce the composite of hospitalization for HF and CV death compared to placebo, and there was no difference between the different SGLT2i, as shown in Table 4. The ranking based on SUCRA values is presented in Table 5. Compared with placebo, dapagliflozin significantly improved hospitalization for HF [RR = 0.51, CI (0.33-0.80)] and CV death [RR = 0.73, CI (0.54-0.97)], with no significant differences noted between the different SGLT2i treatments. These findings are outlined in Table 4, and the associated SUCRA rankings are presented in Table 5. Compared to placebo, dapagliflozin reduced all-cause death [RR = 0.69, CI (0.48-0.99)], and dapagliflozin [RR = 0.40, CI (0.18-0.93)] increased a composite of urinary and reproductive infections, with no differences observed between the different SGLT2i treatments. The SUCRA values for these comparisons are also presented in Table 5.
The efficacy of SGLT2i in HFpEF patients
The network plot presented in Figure 5 indicates that, compared to placebo, sotagliflozin, dapagliflozin, and empagliflozin did not significantly reduce a composite of hospitalization for HF and CV death, individual hospitalizations for HF and CV death, or all-cause mortality, as detailed in Table 6. No significant differences were observed between the different SGLT2i treatments in these outcomes. The ranking based on SUCRA values is shown in Table 7.
Compared to placebo, dapagliflozin [MD = −272.79, CI (−469.26 to −76.32)] showed a significant difference in reducing NT-proBNP levels, while canagliflozin showed no difference. There were no significant differences among the different SGLT2i treatments. The ranking based on SUCRA values is as follows: dapagliflozin (87%), canagliflozin (48%), and placebo (14%). There was no statistical difference in improving LVESV and LVEDV compared to placebo or among the different SGLT2i treatments. The ranking based on SUCRA values is detailed in Table 7.
Consistency and small sample study effect
Comparison-adjusted funnel plots were utilized to assess publication bias, focusing on a range of outcome indicators such as a composite of hospitalization for HF and CV death, hospitalization for HF, CV death, all-cause death, urinary and reproductive infections, 6MWT, NT-proBNP, KCCQ, LAVi, E/e', LVMi, LVEDV, LVESV, LVEF, and HCT. The network funnel plots revealed the presence of small-sample effects in the comparison between dapagliflozin and placebo for a composite of hospitalization for HF and CV death (Figure 6A), a composite of urinary and reproductive infections (Figure 6E), CV death (Figure 6C), 6MWT and NT-proBNP (Figures 6F,G), LVMi (Figure 6K), LVESV (Figure 6M), and LVEDV (Figure 6L). The comparison between canagliflozin and placebo showed a small-sample effect in hospitalization for HF (Figure 6B), while the comparison between empagliflozin and placebo indicated small-sample effects in CV death (Figure 6C), LVESV (Figure 6M), LVEF (Figure 6N), and all-cause death (Figure 6D).
Discussion
This review analyzed 77 RCTs involving 43,561 patients using Bayesian network meta-analysis for a comprehensive evaluation. The study encompassed more than 10 outcome indicators, including a composite of hospitalization for HF and CV death, hospitalization for HF, CV death, a composite of urinary and reproductive infections, and assessments of cardiac structure. Subgroup analysis was performed based on the ejection fraction of HF. Although the efficacy of SGLT2i varies slightly with different LVEF baselines, SGLT2i may be beneficial in patients with HF regardless of baseline LVEF. Compared with placebo, SGLT2i demonstrated a significant advantage in reducing a composite of hospitalization for HF and CV death, hospitalization for HF, and CV death, and in improving KCCQ scores, while showing no significant impact on reducing all-cause mortality. Indirect comparisons between different SGLT2i suggest improvements in a composite of hospitalization for HF and CV death, hospitalization for HF, and CV death. Sotagliflozin outperformed empagliflozin and dapagliflozin in reducing hospitalization for HF and CV death. However, there was no difference between empagliflozin and dapagliflozin. Nevertheless, given the limited research on sotagliflozin, further investigation is warranted.
Regarding the safety profile in the total HF population, SGLT2i are associated with an increased risk of urinary and reproductive system infections, with dapagliflozin showing the highest risk among them; however, there was no distinction between the various types of SGLT2i. A previous meta-analysis (83) showed that, except for dapagliflozin, SGLT2i did not increase the incidence of urinary and reproductive system infections, which is consistent with our findings. Moreover, the US Food and Drug Administration has included this potential side effect in its list of adverse reactions. HCT was utilized as a reference indicator to assess low blood volume. The meta-analysis demonstrated that SGLT2i resulted in a rise in HCT relative to placebo, implying an elevated hypotension hazard with SGLT2i. The primary mechanism of action of SGLT2i involves the inhibition of the SGLT2 transporter, predominantly located in the S1 segment of the proximal tubules (84), increasing the excretion of glucose in the urine. Nevertheless, SGLT2 inhibition also diminishes sodium reabsorption in the proximal tubules, potentially increasing sodium excretion. Previous studies (85,86) reported a correlation between the administration of SGLT2i and reductions in body weight and blood pressure.
The network meta-analysis results indicated that dapagliflozin and empagliflozin significantly improved NT-proBNP and 6MWT. However, no statistically significant difference was observed among the different SGLT2i. While SGLT2i have shown promise in treating HF, it is crucial to determine whether they directly influence the heart's structure and function. Therefore, we collected relevant indicators, such as LAVi, E/e', LVMi, and LVEDV, to systematically evaluate the changes in cardiac structure in HF patients treated with SGLT2i. The results showed that SGLT2i significantly reduced LVMi, LVEDV, LAVi, and LVESV and increased LVEF, reflecting significant benefits in improving cardiac systolic and diastolic function. Cardiac anatomy and functional parameters are vital in predicting the prognosis and quality of life of HF patients. Animal studies conducted … According to the grading of HFpEF by the European and American Heart Associations, this study conducted a subgroup analysis based on ejection fraction. The HFpEF (EF ≥ 50%) group did not show significant differences in reducing a composite of hospitalization for HF and CV death, hospitalization for HF, CV death, or all-cause death. No significant differences were observed between different SGLT2i. However, some meta-analyses (90-92) have shown that SGLT2i can reduce a composite of HF and CV death hospitalizations; these previous studies defined HFpEF as EF greater than 40%, which differs from our study, and therefore our study should be more convincing. Regarding the safety of HFpEF patients, SGLT2i also present risks of urinary and reproductive infections, with empagliflozin and sotagliflozin being notable culprits. Canagliflozin has demonstrated higher safety compared to sotagliflozin in this respect. In terms of improving ventricular remodeling, compared to placebo, SGLT2i showed improvement in LAVi, E/e', LVEF, and LVMi. However, no significant differences were observed in LVESV and LVEDV, and there was no difference between different SGLT2i. The mechanism of HFpEF remains unclear, and left ventricular diastolic dysfunction is considered the main pathophysiological mechanism underlying the occurrence of HFpEF (93). Our research also confirmed that SGLT2i can improve the diastolic function of HFpEF patients. Typically, remodeling of the patient's cardiac structure can significantly enhance their prognosis and quality of life. Unfortunately, there has been limited research on the quality-of-life scores of HFpEF patients; thus, this outcome measure was not included in our analysis. In the subgroup analysis of HFrEF (EF < 50%), the network meta-analysis results revealed significant effects of SGLT2i in reducing hospitalization for HF, CV death, all-cause death, and NT-proBNP and in improving 6MWT. Interestingly, this finding contrasts with the statistical results obtained before conducting the subgroup analysis, indicating the importance of evaluating the contribution of SGLT2i to HF based on ejection fraction. Furthermore, SGLT2i showed significant advantages in improving all-cause death. The indirect comparison revealed no statistical difference between different SGLT2i. Regarding the safety of HFrEF, dapagliflozin significantly increased the risk of a composite of urinary and reproductive infections compared to placebo. Additionally, our analysis revealed that SGLT2i could enhance KCCQ scores in HFrEF patients. Regarding ventricular remodeling, our study revealed that SGLT2i reduced LVMi, LVESV, and LVEDV and increased LVEF. These findings suggest that SGLT2i can enhance diastolic and
systolic function in patients with HFrEF, thereby potentially augmenting the prognostic outcomes for these patients. The therapeutic effect of SGLT2i on cardiac structural remodeling was found to be significantly better in HFrEF patients than in HFpEF patients, with SGLT2i demonstrating superiority in improving cardiac diastolic function in HFpEF patients. Consistent with our findings, a previous meta-analysis (94) showed that empagliflozin had a more significant effect in improving cardiac structure.
This study presents several limitations. ① This study mainly focuses on empagliflozin and dapagliflozin, with relatively little research available on canagliflozin, sotagliflozin, ipragliflozin, and ertugliflozin. Future research should explore these alternative SGLT2i to provide a more comprehensive understanding of their efficacy and safety profiles. ② Currently, there is only one direct comparison between dapagliflozin and empagliflozin available in the literature, leading to an indirect evaluation of the efficacy and safety of canagliflozin, sotagliflozin, ipragliflozin, and ertugliflozin in treating HF patients. Consequently, a potential bias exists between the reported results and the actual drug performances, underscoring the need for further direct head-to-head trials to validate their efficacy and safety profiles. ③ There are variations in baseline characteristics such as gender, age, race, and chronic medical conditions among the included studies, potentially resulting in clinical heterogeneity. ④ Variations in follow-up durations between the six SGLT2i drug studies and within individual studies for each drug could introduce bias into the study results. ⑤ The limited number of studies on HF with HFpEF (EF ≥ 50%) underscores the necessity for more research to substantiate the relevant findings.
Conclusion
In summary, SGLT2i can significantly improve the prognosis of all patients with HF despite the associated increased risk of urinary and reproductive infections. Overall, HF patients benefit from enhanced cardiac remodeling, with those with HFrEF experiencing the most substantial benefits. Indirect comparisons between different SGLT2i revealed no significant differences in HFrEF. Among the six types of SGLT2i, sotagliflozin demonstrated superiority over empagliflozin and dapagliflozin in reducing hospitalization for HF and cardiovascular death in the total HF population. Canagliflozin exhibited higher safety than sotagliflozin regarding urinary and reproductive infections in patients with HFpEF. Overall, SGLT2i showed better efficacy in patients with HFrEF than in those with HFpEF.
FIGURE 3 A network plot of each comparison in all eligible trials in HFrEF or HFpEF.(A) The network plot of each comparison in terms of a composite of hospitalization for HF and CV death.(B) The network plot of each comparison in terms of hospitalization for HF.(C) The network plot of each comparison in terms of CV death.(D) The network plot of each comparison in terms of all-cause death.(E) The network plot of each comparison in terms of a composite of urinary and reproductive infections.(F) The network plot of each comparison in terms of 6 min walk distance.(G) The network plot of each comparison in terms of NT-proBNP.(H) The network plot of each comparison in terms of KCCQ.(I) The network plot of each comparison in terms of LAVi.(J) The network plot of each comparison in terms of E/e'.(K) The network plot of each comparison in terms of LVMi.(L) The network plot of each comparison in terms of LVEDV.(M) The network plot of each comparison in terms of LVESV.(N) The network plot of each comparison in terms of LVEF.(O) The network plot of each comparison in terms of HCT.
FIGURE 4 A network plot of each comparison in all eligible trials in HFrEF.(A) The network plot of each comparison in terms of a composite of hospitalization for HF and CV death.(B) The network plot of each comparison in terms of hospitalization for HF.(C) The network plot of each comparison in terms of CV death.(D) The network plot of each comparison in terms of all-cause death.(E) The network plot of each comparison in terms of a composite of urinary and reproductive infections.(F) The network plot of each comparison in terms of 6 min walk distance.(G) The network plot of each comparison in terms of NT-proBNP.(H) The network plot of each comparison in terms of KCCQ.(I) The network plot of each comparison in terms of LAVi.(J) The network plot of each comparison in terms of E/e'.(K) The network plot of each comparison in terms of LVMi.(L) The network plot of each comparison in terms of LVEDV.(M) The network plot of each comparison in terms of LVESV.(N) The network plot of each comparison in terms of LVEF.
FIGURE 5 A network plot of each comparison in all eligible trials in HFpEF.(A) The network plot of each comparison in terms of a composite of hospitalization for HF and CV death.(B) The network plot of each comparison in terms of hospitalization for HF.(C) The network plot of each comparison in terms of CV death.(D) The network plot of each comparison in terms of all-cause death.(E) The network plot of each comparison in terms of a composite of urinary and reproductive infections.(F) The network plot of each comparison in terms of NT-proBNP.(G) The network plot of each comparison in terms of E/e' (H) The network plot of each comparison in terms of LVMi.(I) The network plot of each comparison in terms of LVEDV.(J) The network plot of each comparison in terms of LVESV.(K) The network plot of each comparison in terms of LVEF.
TABLE 1
Characteristics of included studies.
TABLE 1 Continued
FIGURE 2
Risk of bias summary of all eligible RCTs evaluating the effect of SGLT2i in HFrEF or HFpEF.
TABLE 2
Comparison of the efficacy and safety of different SGLT2i in treating HF [RR and mean difference (95% CI)].
TABLE 3
Ranking probability of the efficacy of drug in patients with HF.
TABLE 4
Comparison of the efficacy and safety of different SGLT2i in treating HFrEF [RR and mean difference (95% CI)].
TABLE 5
Ranking probability of the efficacy of drug in patients with HFrEF.
TABLE 6
Comparison of the efficacy and safety of different SGLT2i in treating HFpEF [RR and mean difference (95% CI)].
TABLE 7
Ranking probability of the efficacy of drugs in patients with HFpEF.
|
v3-fos-license
|
2020-03-19T19:45:39.506Z
|
2019-12-30T00:00:00.000
|
214131740
|
{
"extfieldsofstudy": [
"Business"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://journal.unnes.ac.id/nju/index.php/DP/article/download/22504/10002",
"pdf_hash": "193e63a25b46f486fb0ce6badb7cd75ea529d3b2",
"pdf_src": "MergedPDFExtraction",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46429",
"s2fieldsofstudy": [
"Business"
],
"sha1": "89fb635aa3c51b3bb925f2b0059d328c9e9cc588",
"year": 2019
}
|
pes2o/s2orc
|
The Model of Entrepreneurial Commitment: Strategies for Improving Student Start-Up Business Performance
The classic financial factor determining the performance of a micro business is working capital management, but such efforts often fail because of entrepreneurial commitment issues. This phenomenon motivated us to examine the role of entrepreneurial commitment in strengthening the effect of working capital management on business performance. This study aimed to determine the effect of working capital management on student start-up performance as moderated by entrepreneurial commitment. This was a quantitative study. The sample consisted of 169 student start-up businesses throughout the Province of Central Java. The data were analyzed using moderated regression analysis (MRA). The MRA results indicated that entrepreneurial commitment moderated the effect of working capital management on start-up sustainability, while working capital alone had no effect on business performance. The implication of this research is a development of risk and return theory in which entrepreneurial commitment reinforces the effect of working capital management on business performance.
In a study conducted by Azhar et al. (2017) investigating the effect of working capital management policies on the financial performance of SMEs in Malaysia, the results indicated that there was a significant relationship between working capital management policies and the financial performance of SMEs. Research from Hassan et al. (2014) on non-financial companies registered in Pakistan showed that the working capital management indicators of inventory and accounts receivable were significantly related to the financial performance indicator Return on Total Assets (ROA), but had no significant effect on Return on Equity (ROE). Meanwhile, Lin et al. (2016) found that the Cash Conversion Cycle (CCC) showed a negative and significant relationship with Return on Total Assets (ROA).
However, research findings from Dalimunthe (2018) showed that Working Capital Turnover (WCT), a measure of working capital management, did not significantly affect company performance as measured by ROA. The research of Simon et al. (2018) likewise showed an inconsistent relationship between working capital management and the financial performance of non-financial companies in Nigeria, depending on the typical conditions of the business being studied and the performance measures adopted. Their results found that accounts receivable management (average age of accounts receivable, or collection period) and inventory management (average age of inventory) had a negative relationship with ROA (return on assets), while the cash conversion cycle had a positive relationship with ROA.
Based on these contradictions, cash management is evidently not the only main determinant. Many entrepreneurs rely on commitment as a determinant of their ability to survive in various circumstances (Indrawati et al, 2015). Commitment refers to a person's psychological attachment to something through a sense of ownership, identification with its goals, and readiness to accept all challenges (Ezekiel et al, 2018). This is in line with the findings of Rashid et al (2003), which showed that organizational commitment had an effect on ROA and ROE as indicators of financial performance, but did not have an effect on liquidity ratios in companies listed on the Kuala Lumpur Stock Exchange.
INTRODUCTION
Working capital comprises the most liquid company assets, such as cash, accounts receivable, and inventory (Agustina et al, 2015). According to Mukhopadhyay (2004), working capital is a very important factor in maintaining liquidity, survival, solvency, and business profitability (Sadiq R, 2017). For this reason, working capital management is needed, because in general every company requires sufficient working capital to run its business operations (Olfimarta et al, 2019). Through working capital management, the entrepreneur or manager must determine how much cash will be reserved or spent to sustain the company's business continuity during the cash conversion cycle, which is the length of time required by the company to convert its product into cash (Lin et al., 2016). In practice, routine working capital management consists of activities related to managing cash, inventory, and credit risk (Orobia et al, 2016).
Cash management is concerned with accelerating cash turnover so that cash can be used to finance the company's routine activities without disturbing the company's financial condition and liquidity (Olfimarta et al, 2019). Inventory management is concerned with determining the appropriate amount of capital to allocate to inventory; an error in determining the amount of inventory will reduce corporate profits (Olfimarta et al, 2019), and the company will not be able to operate and sell if it does not have sufficient inventory (Margaretha et al, 2016). Working capital management for accounts receivable can be carried out by making credit policies more effective, for example by providing sales discounts that encourage customers to pay early (Agustina et al, 2015).
Many studies have tried to prove the effect of working capital management on company performance, including the studies by Azhar et al. (2017), Hassan et al. (2014), Lin et al. (2016), Dalimunthe (2018), and Simon et al. (2018) described above. Meanwhile, research conducted by Emhan et al (2016) and Ibrahim et al (2018) proved that commitment strengthened performance. Therefore, it is very interesting to study more closely the relationship between working capital management and start-up business performance, with entrepreneurial commitment added as a moderator that amplifies the effect of working capital management on start-up business performance. This model was developed by combining risk and return theory and organizational commitment theory, which was then derived into entrepreneurial commitment as a reinforcement of the effect of working capital management on business performance.
Based on preliminary studies conducted by the researchers, most student start-up businesses experienced difficulties in working capital management, which affected their performance. Some start-up businesses could not continue because there was no working capital to buy raw materials, pay labour, and pay debts. Other students were able to continue their businesses because they had a strong entrepreneurial commitment and good working capital management.
Capital can be described as the heart of the company: without capital, the company cannot survive. Working capital is an investment of funds in the company's current assets, such as cash, securities, accounts receivable and inventories (Fahmi, 2016). Investing, of course, brings a variety of risks alongside the benefits obtained. Risk and return theory is one of the most important theories in portfolio management, and the relationship between risk and return has long attracted researchers in business, economics, and finance (Mukherji et al., 2008). Every investment decision is related to risk and return (Richard et al., 2008). When utilizing idle funds, investors try to choose investments that promise returns at an acceptable level of risk, in the hope of earning a return on the investment (Prabawa & Lukiastuti, 2015). The main focus of risk seekers is profit opportunities, so they must always weigh the advantages and disadvantages. Good working capital investment management is therefore needed to increase profits and reduce risk as efficiently as possible (Tiegen & Burn, 1997). This study aimed to determine the effect of working capital management on student start-up business performance as moderated by entrepreneurial commitment.
Working capital management can be seen from working capital or cash turnover, accounts receivable turnover, and inventory turnover (Wibowo & Wartini, 2012). Working capital turnover is also expressed through the cash conversion cycle (CCC), defined as the period during which working capital is tied up, that is, the length of time the company needs to convert its product into cash (Lin et al., 2016). The longer the cash conversion process, the greater the CCC value; this inhibits company activity, lowers productivity, and thus reduces sales volume and the profit generated by the company (Iswandi, 2012). According to Olfimarta and Wibowo (2019), the greater the amount of cash held, the higher the level of liquidity and the smaller the risk of being unable to pay financial obligations. Even so, the more cash is saved, the more money sits idle, so the company loses opportunity costs and profits fall.
The next indicator is accounts receivable, which represent company claims on third parties arising from credit sales; the company must continue to monitor and follow up on these receivables (Hasundungan & Herawati, 2018). Receivables therefore need to be managed so that they are settled promptly, because the longer receivables remain unpaid, the longer the company's funds are tied up, which hampers operations and reduces sales and profits (Iswandi, 2012). The final indicator is inventory turnover. If this indicator is poor, the company will not be able to produce or sell its products (Hasundungan & Herawati, 2018). However, holding too much inventory also creates risks, such as wear, damage and loss, and raises interest, storage and maintenance costs, increasing the possibility of losses from damage and declining quality, which reduces company profits (Iswandi, 2012; Olfimarta and Wibowo, 2019). Good inventory management is therefore needed so that an adequate stock of inventory supports a high level of sales (Margaretha & Oktaviani, 2016).
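To make these indicators concrete, the sketch below computes the three turnover ratios and the resulting cash conversion cycle from basic financial figures. It is an illustration only: the variable names, the 365-day convention and the example figures are assumptions rather than data from this study, and the formulas follow standard textbook definitions rather than the exact operationalisation used here.

```python
# Illustrative sketch (not from the study): computing the three working capital
# turnover indicators and the cash conversion cycle (CCC) from basic financial
# figures, using standard textbook formulas.

def turnover_ratios(net_sales, cogs, avg_cash, avg_receivables, avg_inventory,
                    avg_payables, days_in_period=365):
    cash_turnover = net_sales / avg_cash               # how often cash revolves per period
    receivable_turnover = net_sales / avg_receivables  # credit sales assumed ~ net sales
    inventory_turnover = cogs / avg_inventory

    dso = days_in_period / receivable_turnover         # days sales outstanding
    dio = days_in_period / inventory_turnover          # days inventory outstanding
    dpo = days_in_period * avg_payables / cogs         # days payables outstanding

    ccc = dio + dso - dpo                              # cash conversion cycle, in days
    return {
        "cash_turnover": cash_turnover,
        "receivable_turnover": receivable_turnover,
        "inventory_turnover": inventory_turnover,
        "ccc_days": ccc,
    }

# Example: a small start-up with hypothetical annual figures (Rp thousand).
print(turnover_ratios(net_sales=35_000, cogs=20_000, avg_cash=4_500,
                      avg_receivables=1_300, avg_inventory=390, avg_payables=800))
```

A longer cash conversion cycle means working capital stays tied up longer before returning as cash, which is the mechanism described above.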
Commitment is the key to success for entrepreneurs. Fully committed to their business, they are ready to do what needs to be done with all their heart, soul and body, even over a long period, and they truly believe in the products they produce (Sahabuddin, 2013). Commitment is a key determinant of organizational performance (Mowday et al., 2013). It is characterized by an active relationship with the organization, with high loyalty and self-alignment with the organization or business (Brahmasari and Sungkono, 2009). Commitment gives individuals the view that they are an integral part of the organization: threats to the organization are also threats to themselves, so they become more actively and creatively involved in organizational activities to help suppress those threats (Irefin & Mechanic, 2014). Commitment leads individuals to give everything they have in order to achieve their goals (Sidiqqoh and Alamsyah, 2017); they are ready to devote their time, energy, and mind. Commitment can stimulate entrepreneurs to continuously improve their performance, which in turn raises the performance of the businesses they run. Suryana et al. (2019) proved that the performance of SMEs was affected by organizational commitment both simultaneously and partially, and a high level of entrepreneurial commitment determines a high level of business achievement (Ezekiel et al., 2018). Therefore, this study proposes the hypothesis that entrepreneurial commitment affects business performance.
Commitment also encourages people to work consistently and sincerely, holding on to the values and objectives they want to achieve in order to succeed (Sahabuddin, 2013). Siddiqoh and Alamsyah (2017) stated that entrepreneurs' commitment plays an important role in improving their business performance, and the high performance of entrepreneurs will in turn affect the company's business performance. This shows that, in improving business performance, an entrepreneur cannot rely on managing the working capital model alone: although Hassan et al. (2014) and Kusuma and Bachtiar (2018) showed a material effect of working capital management on company performance, working capital management is often less effective on its own in improving business performance (Dalimunthe, 2018; Simon et al., 2018), so it needs to be strengthened by entrepreneurial commitment. A lack of entrepreneurial commitment will depress the performance of the businesses being run (Sahabuddin, 2013). Aside from that, commitment has also proved effective in managers' efforts to earnestly save their organizations (Ibrahim et al., 2018). These arguments lead to the hypothesis that entrepreneurial commitment moderates the effect of working capital management on business performance. Based on the literature review above, this study therefore proposes three hypotheses; Figure 1 presents the research model that summarises them.
METHODS
The population of this study consisted of all managers or owners of student start-ups throughout the Province of Central Java, selected by purposive sampling. The sample size followed the recommendation of Kock and Hadaya (2018), whose inverse square root method indicates a minimum sample of 160 for PLS-SEM analysis. To guard against insufficient data, questionnaires were distributed at random to 200 students in March 2019; of these, 169 usable responses (about 85%) were obtained between March and August 2019.
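For reference, the minimum sample of 160 can be reproduced with the inverse square root rule. The snippet below is only a sketch: the constant 2.486 (corresponding to 5% significance and 80% power) and the assumed minimum path coefficient of 0.197 follow the usual presentation of Kock and Hadaya (2018) and are not parameters reported in this study.

```python
# Illustrative sketch of the inverse square root method for the minimum
# PLS-SEM sample size (Kock & Hadaya, 2018). The minimum expected path
# coefficient of 0.197 is an assumption chosen to reproduce the figure of 160.
import math

def min_sample_inverse_sqrt(min_path_coefficient, z_constant=2.486):
    # n_min must exceed (z / |beta_min|)^2
    return math.ceil((z_constant / abs(min_path_coefficient)) ** 2)

print(min_sample_inverse_sqrt(0.197))  # -> 160
```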
The study variables were as follows. Working capital management was seen from working capital or cash turnover, accounts receivable turnover, and inventory turnover (Wibowo & Wartini, 2012). Entrepreneurial commitment was measured by students' perceived desire to truly strive to be entrepreneurs and their belief that entrepreneurship is the right choice for their future, using the dimensions of commitment of Meyer and Allen (1991): affective commitment, continuance commitment, and normative commitment. Business sustainability was captured by indicators of internal start-up factors following Wood (2006), namely capital and employees, and of external start-up factors following Wang and Chang (2009), namely suppliers and customers.
After the data were collected in the field, they were processed (editing and data conversion) and descriptive statistics were computed. For cash turnover, accounts receivable turnover, inventory turnover, capital, number of employees, suppliers and customers, the mean, standard deviation, minimum and maximum were reported. The entrepreneurial commitment variable was summarised as a mean score and categorised into five criteria (very high, high, medium, low and very low) based on a 1-5 Likert scale. Inferential analysis was then carried out with WarpPLS-SEM, including the moderation model.
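As an illustration, the five-category summary of a 1-5 Likert mean could be implemented as follows; the equal-width cut-off points are an assumed convention, since the exact class boundaries used in the study are not reported here.

```python
# Illustrative sketch: mapping a 1-5 Likert mean into the five criteria
# mentioned in the text. The equal-width bands (width 0.8) are an assumption.
def likert_category(mean_score):
    if mean_score < 1.80:
        return "very low"
    if mean_score < 2.60:
        return "low"
    if mean_score < 3.40:
        return "medium"
    if mean_score < 4.20:
        return "high"
    return "very high"

print(likert_category(4.3))  # -> "very high"
```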
RESULT AND DISCUSSION
Descriptive analysis showed that cash turnover had a maximum value of Rp 35,000 (thousand), a minimum of Rp 100 (thousand) and a mean of Rp 4,477.5 (thousand), while accounts receivable turnover had a maximum value of Rp 15,000 (thousand), a minimum of Rp 55 (thousand) and a mean of Rp 1,287,125. Inventory turnover had a maximum value of Rp 2,500 (thousand), a minimum of Rp 200 (thousand) and a mean of Rp 388,750. Capital had a maximum value of Rp 68,000 (thousand), a minimum of Rp 114 (thousand) and a mean of Rp 18,940,075. The employee variable had a maximum value of 7, a minimum of 1 and a mean of 3. The supplier variable had a maximum value of 20; the remaining descriptive statistics, including those for the supplier and customer variables, are presented in Table 1.
Measurement Analysis
Statistical tests for the evaluation of the measurement model in the WarpPLS analysis were carried out to test construct validity and reliability (Kock, 2019). The assessment at this stage aimed to determine whether each instrument item adequately measured its manifest variable/indicator of the latent variables (working capital management, entrepreneurial commitment, and business performance). The construct validity tests, consisting of convergent validity and discriminant validity, together with construct reliability, are presented in Table 2.
The output showed that several items of the three variables (working capital management, entrepreneurial commitment and business performance) initially had factor loadings below 0.5. After these items were eliminated, the AVE values rose above the cut-off of 0.5 and composite reliability also increased, so the remaining items were valid and reliable.
Based on the diagonal values of the correlations between the latent variables and their errors in Table 3, each variable had its largest correlation with itself rather than with any other variable, so all items in each variable met the discriminant validity criteria. Given that convergent validity, construct reliability and discriminant validity were all satisfied, the inner model analysis (model fit and quality indices) could be performed; the results are shown in Table 4. All ten fit and quality indices, namely Average path coefficient (APC), Average R-squared (ARS), Average adjusted R-squared (AARS), Average block VIF (AVIF), Average full collinearity VIF (AFVIF), Tenenhaus GoF (GoF), Sympson's paradox ratio (SPR), R-squared contribution ratio (RSCR), Statistical suppression ratio (SSR) and Nonlinear bivariate causality direction ratio (NLBCDR), met the accepted and ideal criteria, showing that the model could proceed to regression testing with WarpPLS-SEM.
As mentioned in the literature review, three hypotheses were formulated. Hypotheses 1 and 2 were rejected, with p values of 0.488 and 0.478, respectively. Hypothesis 3 was accepted: entrepreneurial commitment significantly moderated the effect of inventory turnover on business performance under an absolute moderation model, with p < 0.001, well below the cut-off value, as shown in Table 5.
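To illustrate what such a moderation test estimates, the sketch below uses ordinary least squares with an interaction term on simulated data. WarpPLS-SEM is a separate, proprietary tool, so this is only a conceptual analogue: the variable names, the simulated data and the coefficients are assumptions, and the point is simply that the interaction coefficient plays the role of the moderation effect.

```python
# Conceptual sketch only: a moderation test via an interaction term in OLS.
# This is NOT WarpPLS-SEM; the data are simulated and the names are illustrative.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 169
wcm = rng.normal(size=n)          # composite working capital management score
commitment = rng.normal(size=n)   # entrepreneurial commitment score
performance = (0.05 * wcm + 0.05 * commitment
               + 0.4 * wcm * commitment + rng.normal(size=n))

df = pd.DataFrame({"performance": performance, "wcm": wcm, "commitment": commitment})
model = smf.ols("performance ~ wcm * commitment", data=df).fit()
# The wcm:commitment row of the coefficient table is the moderation test.
print(model.summary().tables[1])
```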
The rejection of the first hypothesis means that working capital management was not shown to affect business performance. From this perspective, working capital management can be seen from working capital or cash turnover, accounts receivable turnover, and inventory turnover (Wibowo & Wartini, 2012). Working capital turnover is also expressed through the cash conversion cycle (CCC), the period during which working capital is tied up, that is, the length of time the company needs to convert its product into cash (Lin et al., 2016). The longer the cash conversion process, the greater the CCC value; this inhibits company activity, lowers productivity, and thus reduces sales volume and the profit generated by the company (Iswandi, 2012). According to Olfimarta and Wibowo (2019), the greater the amount of cash held, the higher the level of liquidity and the smaller the risk of being unable to pay financial obligations; even so, the more cash is saved, the more money sits idle, so the company loses opportunity costs and profits fall. The next indicator, accounts receivable, represents company claims on third parties arising from credit sales, and the company must continue to monitor and follow up on these receivables (Hasundungan & Herawati, 2018). Receivables need to be managed so that they are settled promptly, because the longer receivables remain unpaid, the longer the company's funds are tied up, which hampers operations and reduces sales and profits (Iswandi, 2012). The final indicator is inventory turnover: if it is poor, the company will not be able to produce or sell its products (Hasundungan & Herawati, 2018), while holding too much inventory creates risks such as wear, damage and loss, and raises interest, storage and maintenance costs, reducing company profits (Iswandi, 2012; Olfimarta and Wibowo, 2019). Good inventory management is therefore needed so that an adequate stock of inventory supports a high level of sales (Margaretha & Oktaviani, 2016).
This empirical test rejected the second hypothesis, which states that entrepreneurial commitment affects business performance. Commitment is the key to success for entrepreneurs: fully committed to their business, they are ready to do what needs to be done with all their heart, soul and body, even over a long period, and they truly believe in the products they produce (Sahabuddin, 2013). Commitment is a key determinant of organizational performance (Mowday et al., 2013) and is characterized by an active relationship with the organization, with high loyalty and self-alignment with the organization or business (Brahmasari and Sungkono, 2009). Commitment gives individuals the view that they are an integral part of the organization: threats to the organization are also threats to themselves, so they become more actively and creatively involved in organizational activities to help suppress those threats (Irefin & Mechanic, 2014).
Commitment leads individuals to give everything they have in order to achieve their goals (Sidiqqoh and Alamsyah, 2017); they are ready to devote their time, energy, and mind. Commitment can stimulate entrepreneurs to continuously improve their performance, which in turn raises the performance of the businesses they run. Suryana et al. (2019) proved that the performance of SMEs was affected by organizational commitment both simultaneously and partially, and a high level of entrepreneurial commitment determines a high level of business achievement (Ezekiel et al., 2018).
The acceptance of the third hypothesis indicates that entrepreneurial commitment moderates the effect of working capital management on business performance. Commitment encourages people to work consistently and sincerely, holding on to the values and objectives they want to achieve in order to succeed (Sahabuddin, 2013). Siddiqoh and Alamsyah (2017) stated that entrepreneurs' commitment plays an important role in improving their business performance, and the high performance of entrepreneurs will in turn affect the company's business performance. This shows that, in improving business performance, an entrepreneur cannot rely on managing the working capital model alone: although Hassan et al. (2014) found a material effect of working capital management on company performance, working capital management is often less effective on its own in improving business performance (Dalimunthe, 2018; Simon et al., 2018), so it needs to be strengthened by entrepreneurial commitment. A lack of entrepreneurial commitment will depress the performance of the businesses being run (Sahabuddin, 2013). In addition, commitment has also proved effective in managers' efforts to earnestly save their organizations (Ibrahim et al., 2018).
CONCLUSION
Working capital management, measured through the turnover indicators (cash, accounts receivable and inventories) applied in the student start-up businesses, proved to have no impact on start-up business performance, and entrepreneurial commitment alone also failed to improve student start-up business performance. However, the interaction of entrepreneurial commitment with working capital management significantly improved student start-up business performance. This finding shows that the synthesis of risk and return theory with entrepreneurial commitment is effective in explaining student start-up business performance.
The empirical evidence indicates that entrepreneurial commitment had not yet been exercised properly, so working capital management had not been fully applied in the student start-up businesses. This study measured business performance only on its financial dimension and did not capture the non-financial performance of the start-ups run by students. Further research should therefore also identify business performance in terms of marketing management, human resources and production.
Review of the efficacy and safety of over-the-counter medicine
Over-the-counter medicines are available without prescription because of their safety and effectiveness in treating minor ailments and symptoms. The objective of the study was to analyze the availability and quality of published systematic reviews on nonprescription medicines, identifying the groups for which there are gaps in evidence. We identified published articles through the Cochrane Database of Systematic Reviews and MEDLINE, from the start of each database until May 2012, using the search terms "nonprescription drugs," "over the counter," and "OTC." We searched for systematic reviews addressing the efficacy and safety of drugs dispensed without a prescription according to the lists published by the Association of the European Self-Medication Industry and in Brazil, for the clinical conditions listed in the Groups and Specified Therapeutic Indications list. We included 49 articles: 18 were of moderate quality and 31 of high quality. Of the studies, 74.5% demonstrated evidence of efficacy in favor of the drugs evaluated. Of the 24 studies that evaluated safety, 21% showed evidence unfavorable to the drug. Overall, the evidence found in the studies included in the overview is favorable to the use of the drugs evaluated. However, there are gaps in evidence for some therapy groups.
INTRODUCTION
According to the World Health Organization (WHO), over-the-counter (OTC) medicines are drugs approved by health authorities to treat minor ailments and symptoms. They are available without prescription because of their safety and effectiveness, if used in accordance with the guidelines available on the package inserts and on labels (ABIMIP, 2012).

In Brazil, this class of drugs is regulated by Board Resolution (RDC) of the National Agency for Sanitary Surveillance (ANVISA) Number 138 of May 29, 2003, which provides for the sale category of products (ANVISA, 2003). According to ANVISA, OTC drugs can be registered as medicines that meet the conditions of the list of Groups and Specified Therapeutic Indications (GITE).

The Association of the European Self-Medication Industry (AESGP) has lists of nonprescription drugs marketed in 36 countries. These lists group the drugs according to their chemical, pharmacological, and therapeutic characteristics, following the Anatomical Therapeutic Chemical (ATC) Classification System established by WHO. In this case, the symbol "OTC" means that at least one dosage or form of the drug has the legal status of nonprescription medication in some of the countries considered (ABIMIP, 2012).

Although OTC medicines are considered relatively safe to dispense without a prescription, some studies, like the one conducted by Smith et al. (2012), call their efficacy and safety into question because of the lack of good quality trials. In this sense, evidence-based health (EBH) assumes that the behavior of professionals in clinical practice should be based on the best scientific evidence available at the time (Guaudard, 2008). Thus, EBH integrates clinical expertise with the ability to analyze and apply rational scientific information (Lopes, 2000; Manchikanti et al., 2009).

With respect to studies of treatment, systematic reviews (SR) are currently considered to provide the highest level of evidence for any clinical question (El Dib, 2007), and SRs and meta-analyses are useful for monitoring important innovations in healthcare. A systematic review of systematic reviews (overview) is a survey designed primarily to summarize data from several reviews, focusing on the effects of clinical interventions on a health condition; it is carried out in order to analyze the quality of the systematic reviews and inform readers how failures may influence the results (Higgins, Green, 2011).
The aim of this study was to analyze the availability and quality of systematic reviews published about nonprescription medicines, identifying the groups for which there are gaps in evidence.
Data sources
We performed the overview of systematic reviews, following a pre-established protocol, according to the PRISMA model (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) (Moher et al., 2009).
A systematic search was performed on the Cochrane Database of Systematic Reviews (CDSR) and MEDLINE (via PubMed), covering the period from the beginning of each database until May 2012. To identify systematic reviews in PubMed, the following search strategy was used: (systematic review*[tiab] OR meta-analysis [pt] OR meta-analysis [tiab] OR systematic literature review [tiab] OR "Cochrane Database Syst Rev"[Journal] OR (search*[tiab] AND (medline or embase OR peer-review* OR literature OR "evidence-based" OR pubmed OR IPA or "international pharmaceutical abstracts"))) NOT (letter[pt]
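For illustration, a search of this kind can be scripted against PubMed with Biopython's Entrez module, as sketched below. The query used here is an abridged version of the strategy above, the e-mail address is a placeholder, and the date limits mirror the May 2012 cut-off; this is not the procedure the authors describe, only a rough equivalent.

```python
# Illustrative sketch: running a simplified version of the PubMed search with
# Biopython's Entrez E-utilities wrapper. The query is abridged from the
# strategy in the text; the e-mail address is a placeholder.
from Bio import Entrez

Entrez.email = "reviewer@example.org"  # NCBI asks for a contact address

query = (
    '("nonprescription drugs"[tiab] OR "over the counter"[tiab] OR OTC[tiab]) '
    'AND (systematic review*[tiab] OR meta-analysis[pt] OR meta-analysis[tiab]) '
    'NOT letter[pt]'
)

handle = Entrez.esearch(db="pubmed", term=query, retmax=200,
                        mindate="1900", maxdate="2012/05/31", datetype="pdat")
record = Entrez.read(handle)
handle.close()
print(record["Count"], record["IdList"][:10])
```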
Study selection
After obtaining the articles, all the steps of the process were performed by two independent reviewers (GCH and AISC), and discrepancies were resolved by consensus. In the absence of agreement, the assistance of a third reviewer (CJC) was requested. The process for selecting the studies followed the PRISMA model (Moher et al., 2009): (a) all articles were analyzed based on their titles and abstracts (screening); (b) the articles deemed relevant were then fully analyzed by two reviewers, applying the inclusion and exclusion criteria (eligibility); and (c) articles that met all the criteria were included in the data collection (inclusion). Articles that raised questions during screening were included and passed to the eligibility stage for examination in full. In addition, we performed a manual search of the references of all articles read in full. No search was performed for unpublished articles or articles in conference proceedings. To be included, articles had to fulfill the following criteria: the article was a systematic review, with or without meta-analysis; it addressed the efficacy and safety of nonprescription medicines according to the lists released by the AESGP and the OTC drugs in Brazil; and the clinical conditions were listed in the GITE list.
The exclusion criteria were: (a) items whose full text was not available through the databases or after contact with the author; (b) items that did not describe or were overviews of systematic reviews; (c) systematic reviews that included only prescription drugs; (d) articles that evaluated the use of medicinal herbs, vitamins, and supplements; (e) items that were defined as systematic reviews, but whose full text did not comply with items 4, 7 and 9 of the PRISMA checklist (provide an explicit statement of questions being addressed with reference to participants, interventions, comparisons, outcomes, and study design; describe all sources of information in the search and date last searched; indicate the selection process of the study) (Moher et al., 2009).
Data extraction
Data were extracted in duplicate and disagreements were resolved by consensus between the reviewers. At the stage of full reading, a critical evaluation of the studies was made in order to verify the methodological quality of the reviews and the possible sources of bias present in each review, using the Assessment of Multiple Systematic Reviews (AMSTAR) instrument developed by Shea et al. (2009).

The AMSTAR total score was obtained by adding one point for each "yes" answer, while any other answer did not receive a point. The score ranged from 0 (zero), the worst quality, to 11 (eleven), the best quality. In addition, studies were categorized as proposed by Mikton and Butchart (2009), in which a score from 0 to 4 indicated a review of low quality, from 5 to 8 moderate quality, and from 9 to 11 high quality.
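A minimal sketch of this scoring rule, with the quality bands of Mikton and Butchart (2009), is shown below; the answer labels are illustrative.

```python
# Minimal sketch of the AMSTAR scoring rule described above: one point per
# "yes" across the 11 items, then the quality bands of Mikton and Butchart (2009).
def amstar_score(answers):
    """answers: sequence of 11 responses such as "yes", "no", "can't answer", "n/a"."""
    answers = [a.strip().lower() for a in answers]
    assert len(answers) == 11
    return sum(a == "yes" for a in answers)

def amstar_quality(score):
    if score <= 4:
        return "low"
    if score <= 8:
        return "moderate"
    return "high"

answers = ["yes"] * 9 + ["no", "can't answer"]
s = amstar_score(answers)
print(s, amstar_quality(s))  # -> 9 high
```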
To collect data on efficacy and safety we used a second standard instrument developed by the authors.
RESULTS AND DISCUSSION
By searching the databases, 228 articles of potential relevance were found, but only 23 were included. The supplementary manual search added 26 articles, so a total of 49 were included in the overview (see the flowchart in Figure 1). Of these, six articles were published in the 1990s and the rest after 2000. The data collected from the included studies are shown in Table I.

Three articles were outdated and were therefore replaced by their updated versions identified through the manual search (Smith et al., 2012). Among the articles included by the supplementary manual search, two had been withdrawn from The Cochrane Library for lack of updating (De Sutter et al., 2009; Taverner, Latte, 2009), as recommended by the Cochrane Collaboration (Higgins, Green, 2011).

In order to widen the scope of this overview, we included an extensive list of OTC drugs marketed in 36 countries and in Brazil. Consequently, some of the medicines included are not OTC drugs in Brazil; for example, diclofenac, sumatriptan, and ranitidine are OTC medicines in some European countries.

Considering the conditions encountered in the clinical studies, most of the articles evaluated patients with acute or chronic pain (35.4%) or specifically migraine (14.6%). Studies evaluating the efficacy and safety of medicines used to combat pain are extremely important, since self-medication in this context is a reality. Other clinical conditions assessed in this overview were quitting smoking (10.4%), cough (8.3%), symptoms of the common cold (8.3%), fever and/or pain (8.3%), constipation (8.3%), fungal infections (4.2%), and dyspepsia (2.1%). Only one study (Jenkins, Costello, Hodge, 2004) assessed the safety of a drug across various clinical conditions.
Compared with the GITE list, various therapeutic groups and clinical indications have not been evaluated by systematic reviews identified with the proposed strategy. There are, therefore, gaps for groups such as antidiarrheal, antispasmodic, antiparasitic, and antiseptic medicines in general.

Of the 24 studies that evaluated safety, five (21%) showed evidence unfavorable to the drug, owing to significant side effects (Edwards et al., 1999; Jenkins, Costello, Hodge, 2004; De Sutter et al., 2009; Derry, Moore, 2012; Derry C. et al., 2012). Safety data should be considered when dispensing medicines, especially OTC drugs, to encourage their rational use.

With regard to studies on the efficacy of OTC medicines, 35 articles (74.5%) showed evidence favorable to the use of the intervention in at least one of the groups of patients studied. Three studies (6.4%) did not show evidence favorable to the drug's efficacy. In one of these, the use of antihistamines was not proven effective in the treatment of common cold symptoms and showed more side effects compared to placebo (De Sutter et al., 2009). Another study evaluating the efficacy of nicotine replacement therapy (NRT) without prescription concluded that the superiority of OTC NRT over unaided smoking cessation had not been demonstrated convincingly (Walsh, 2008).

Likewise, cough medications in children did not demonstrate greater efficacy than placebo, although only a small number of trials were found (Schroeder, Fahey, 2002b). The other three studies on the treatment of cough also showed no good evidence in favor of these medicines (Schroeder, Fahey, 2002a; Chang, Cheng, Chang, 2012; Smith et al., 2012). Moreover, some systematic reviews were inconclusive or found no evidence for the use of the drug, suggesting that more studies are needed.
Concerning the methodological quality of the systematic reviews, according to the evaluation with the AMSTAR instrument, 18 reviews were of moderate quality (AMSTAR score of 5-8), 31 were of high quality (9-11), and no review received a score of 0 to 4, which would indicate low quality. These data suggest that the majority of the published reviews are of good methodological quality. Moreover, we noticed that all the systematic reviews conducted by the Cochrane Collaboration were of high quality.

Among the items examined, those that most often did not receive the answer "yes" were items 10 and 11 of the AMSTAR instrument. Item 10 verifies whether the likelihood of publication bias was assessed, that is, the tendency for studies with positive results to be published more often than studies with negative results (Zhou, Obuchowski, McClish, 2002). The possibility of this type of bias was not reported in 42 articles (86%). Item 11 evaluates whether a conflict of interest statement was included in the study; namely, potential sources of support should be clearly acknowledged in both the systematic review and the studies included (Shea et al., 2009). Although some studies reported sources of support, there was no explicit statement of this in 32 (65%) of the reviews. Similar results have been found in the literature (Santaguida et al., 2013; Remschmidt, Wichmann, Harder, 2014), demonstrating the need to improve the description of potential conflicts of interest and publication bias.

Relating the quality of the studies to clinical conditions, we note that all studies on constipation showed moderate methodological quality. The same occurred with three studies (of four) evaluating fever and/or pain and three studies (of five) on quitting smoking. In these cases, in addition to the items described above, some reviews failed to describe other items, such as providing a list of excluded studies or reporting whether study selection and data extraction were undertaken in duplicate, and they rarely performed meta-analyses.

Regarding the design of the studies found in the systematic reviews, in just one review (Hughes et al., 2011) did the design not include a randomized clinical trial (RCT). According to the author, prospective controlled trials are the best way to assess effectiveness by determining the effect of therapy under actual conditions of use. Another four studies, in addition to RCTs, employed non-randomized studies (Hughes et al., 2003), observational studies (Southey, Soares-Weiser, Kleijnen, 2009), quasi-randomized trials (Meremikwu, Oyo-Ita, 2009; Stead et al., 2012), and cohort studies (Walsh, 2008). The Cochrane Collaboration focuses primarily on systematic reviews of randomized clinical trials because they are more likely to provide unbiased information than other study designs (Higgins, Green, 2011).
The sale of medicines without the need to present a prescription suggests that they are safe and effective. Thus, systematic reviews, which correspond to the highest level of evidence for evaluating the efficacy and safety of nonprescription medicines, are essential contributions to their rational use. The present study examined 49 systematic reviews published up to May 2012, which show no evidence of efficacy or safety for at least three of the nine clinical conditions assessed. This suggests that the use of such medicines in certain clinical conditions is questionable.

Our overview also has some limitations. In our search strategy, we chose a query with greater specificity rather than higher sensitivity, given the large number of OTC drugs marketed. Moreover, many authors of systematic reviews on OTC medications do not use general descriptors such as "nonprescription drugs" or "OTC," which makes locating these reviews in databases difficult. To minimize the possible omission of studies for this reason, we conducted a manual search of the bibliographies of all the studies read in full. Finally, we restricted the inclusion of systematic reviews to those health conditions considered treatable with OTC medications. This was necessary given the existence of several reviews involving OTC drugs for conditions that require prior medical diagnosis, which would be outside the scope of our work. To avoid biases related to this aspect, we included an extensive list of OTC drugs marketed in 36 countries and in Brazil.
CONCLUSION
The methodological quality of the systematic reviews of nonprescription medicines included in this overview is moderate to high, so the quality of the available evidence is good enough for use in clinical practice. The evidence found in the included studies is favorable to the use of most of the drugs evaluated, such as topical antifungals, analgesics, and anti-inflammatory drugs. However, some systematic reviews were inconclusive or found no evidence for the use of the drug, suggesting that more studies are needed, as in the case of nonprescription nicotine replacement therapy, cough medications, and treatments for chronic constipation. There are therefore therapy groups for which there are gaps in evidence, requiring further studies in this area.
FIGURE 1 - Flowchart for the selection of systematic reviews included in the overview.
TABLE I - Main characteristics of systematic reviews on OTC drugs.
Edge Estimation with Independent Set Oracles
We study the task of estimating the number of edges in a graph, where the access to the graph is provided via an independent set oracle. Independent set queries draw motivation from group testing and have applications to the complexity of decision versus counting problems. We give two algorithms to estimate the number of edges in an n-vertex graph, using (i) polylog(n) bipartite independent set queries or (ii) n^{2/3} polylog(n) independent set queries.
INTRODUCTION
We investigate the problem of estimating the number of edges in a simple, unweighted, undirected graph G = ([n], E), where [n] := {1, 2, . . . , n} and m = |E|. Here, the only access to the graph is provided via independent set queries. An instructive analogy comes from geometric range searching: in two or three dimensions, for a set P of n points, half-space counting queries (i.e., what is the size of the set |P ∩ h|, for a query half-space h) can be answered in O(n^{2/3}) time, after near-linear time preprocessing. However, emptiness queries (i.e., is the set P ∩ h empty?) can be answered in O(log n) time. Aronov and Har-Peled [6] used this to show how to answer approximate counting queries (i.e., estimating |P ∩ h|), with polylogarithmic emptiness queries.
As another geometric example, consider the task of counting edges in disk intersection graphs using GPUs [24]. For these graphs, IS queries decide if a subset of the disks has any intersection (this can be done using sweeping in O(n log n) time [11]). Using a GPU, one could quickly draw the disks and check if the sets share a common pixel. In cases like this, when IS and BIS oracles have fast implementations, algorithms exploiting independent set queries may be useful.
Decision versus counting complexity. A generalization of IS and BIS queries previously appeared in a line of work investigating the relationship between decision and counting problems [15,33,34]. Stockmeyer [33,34] showed how to estimate the number of satisfying assignments for a circuit with queries to an NP oracle. Ron and Tsur [31] observed that Stockmeyer implicitly provided an algorithm for estimating set cardinality using subset queries, where a subset query specifies a subset X ⊆ U and answers whether |X ∩ S | = 0 or not. Subset queries are significantly more general and flexible than IS and BIS queries because S corresponds to the set of edges in the graph and X is any subset of pairs of vertices. Namely, IS and BIS queries can be interpreted as restricted subset queries. In particular, the algorithms mentioned cannot be implemented directly using IS or BIS queries.
Indeed, consider subset queries in the context of estimating the number of edges in a graph. To this end, fix |S| = m (i.e., the number of edges in the graph) and |U| = C(n, 2) (the number of possible edges). Stockmeyer provided an algorithm using only O(log log m · poly(1/ε)) subset queries to estimate m within a factor of (1 + ε) with constant success probability. Note that for a high probability bound, which is what we focus on in this article, the algorithm would naively require O(log n · log log m · poly(1/ε)) queries to achieve success probability at least 1 − 1/n. Falahatgar et al. [22] gave an improved algorithm that estimates m up to a factor of (1 + ε) with probability 1 − δ using 2 log log m + O((1/ε²) log(1/δ)) subset queries. Nearly matching lower bounds are also known for subset queries [22, 31, 33, 34]. Ron and Tsur [31] also studied a restriction of subset queries, called interval queries, where they assume that the universe U is ordered and the subsets must be intervals of elements. We view the independent set queries that we study as another natural restriction of subset queries.
Analogous to Stockmeyer's results, a recent work of Dell and Lapinskas [15] provides a framework that relates edge estimation using BIS and edge existence queries to a question in fine-grained complexity. They study the relationship between decision and counting versions of problems such as 3SUM and Orthogonal Vectors. They proved that for a bipartite graph, using O(ε⁻² log⁶ n) BIS queries, and ε⁻⁴ n polylog(n) edge existence queries, one can output a number m̂ such that with probability at least 1 − 1/n², we have (1 − ε)m ≤ m̂ ≤ (1 + ε)m.
Dell and Lapinskas [15] used edge estimation to obtain approximate counting algorithms for problems in fine-grained complexity. For instance, given an algorithm for 3SUM with runtime T, they obtain an algorithm that estimates the number of YES instances of 3SUM with runtime O(T ε⁻² log⁶ n) + ε⁻⁴ n polylog(n). The relationship is simple. The decision version of 3SUM corresponds to checking if there is at least one edge in a certain bipartite graph. The counting version then corresponds to counting the edges in this graph. We note that in their application, the large number O(n polylog(n)) of edge existence queries does not affect the dominating term in the overall time in their reduction; the larger term in the time is a product of the time to decide 3SUM and the number of BIS queries.

Table 1. Estimating the number of edges with different query types.
Query type | Approximation | Number of queries | Reference
Edge existence | 1 + ε | (n²/m) poly(log n, 1/ε) | Folklore (see Section 5.2)
Degree | 2 + ε | √n log n/ε | [23]
Degree + neighbor | 1 + ε | √n poly(log n, 1/ε) | [25]
Subset | 1 + ε | poly(log n, 1/ε) | [22, 34]
BIS | 1 + ε | n poly(log n, 1/ε) | [15]
BIS | 1 + ε | poly(log n, 1/ε) | This Work
IS | 1 + ε | min(√m, n²/m) poly(log n, 1/ε) | This Work
The bounds stated are for high probability results, with error probability at most 1/n. Constant factors are suppressed for readability.
Our Results
We describe two new algorithms. Let G = ([n], E) be a simple graph with m = |E| edges.
The bipartite independence oracle. We present an algorithm that uses BIS queries and computes an estimate m̂ for the number of edges in G such that (1 − ε)m ≤ m̂ ≤ (1 + ε)m. The algorithm performs O(ε⁻⁴ log¹⁴ n) BIS queries and succeeds with high probability (see Theorem 4.9 for a precise statement). Ignoring the cost of the queries, the running time is near linear (we mostly ignore running times in this article because query complexity is our main resource). Since polylog(n) BIS queries can simulate a degree query (see Section 4.4), one can obtain a (2 + ε)-approximation of m by using Feige's algorithm [23], which uses degree queries. This gives an algorithm that uses O(√n polylog(n)/poly(ε)) BIS queries. Our new algorithm provides significantly better guarantees in terms of both the approximation and the number of BIS queries.
The result is somewhat more general than stated previously. One can use the algorithm to estimate the number of edges in any induced subgraph of the original graph. Similarly, one can estimate the number of edges in the graph between any two disjoint subsets of vertices U, V ⊆ [n]; in other words, the algorithm can estimate the size of E(U, V). Compared to the result of Dell and Lapinskas [15], our algorithm uses exponentially fewer queries since we do not spend n polylog(n) edge existence queries. Our improvement does not seem to imply anything for their applications in fine-grained complexity. We leave open the question of finding problems where a more efficient BIS algorithm would lead to new decision versus counting complexity results.
The ordinary independence oracle. We also present a second algorithm, using only IS queries, to compute a (1 + ε)-approximation. It performs O(ε⁻⁴ log⁵ n + min(n²/m, √m) · ε⁻² log² n) IS queries (see Theorem 5.8). In particular, the number of IS queries is bounded by O(ε⁻⁴ log⁵ n + ε⁻² n^{2/3} log² n). The first term in the minimum (i.e., ≈ n²/m) comes from a folklore algorithm for estimating set cardinality using membership queries (see Section 2.3). The second term in the minimum (i.e., ≈ √m) is the number of queries used by our new algorithm. We observe that BIS queries are surprisingly more effective for estimating the number of edges than IS queries. Shedding light on this dichotomy is one of the main contributions of this work.
Comparison with other queries. Table 1 summarizes the results for estimating the number of edges in a graph in the context of various query types. Given some of the results in Table 1 on edge estimation using other types of queries, a natural question is how well BIS and IS queries can simulate such queries. In Section 4.4, we show that O (ε −2 log n) BIS queries are sufficient to simulate degree queries. However, we do not know how to simulate a neighbor query (to find a specific neighbor) with few BIS queries, but a random neighbor of a vertex can be found with O (log n) BIS queries (see the work of Ben-Eliezer et al. [8]). For IS queries, it turns out that estimating the degree of a vertex v up to a constant factor requires at least Ω(n/deg(v)) IS queries (see Section 5.3).
Notation. Throughout, log and ln denote the logarithm taken in base 2 and base e, respectively. For integers u, k, let [k] = {1, . . . , k} and [u : k] = {u, . . . , k}. The notation x = polylog(n) means x = O(log^c n) for some constant c > 0. A collection of disjoint sets U_1, . . . , U_k such that ∪_i U_i = U is a partition of the set U into k parts (a part U_i might be an empty set). In particular, a (uniformly) random partition of U into k parts is chosen by coloring each element of U with a random number in [k] and identifying U_i with the elements colored with i.
Throughout, we use G = ([n], E) to denote the input graph. The number of edges in G is denoted by m = |E|. For a set U ⊆ [n], let E(U) = {uv ∈ E | u, v ∈ U} be the set of edges between vertices of U in G. For two disjoint sets U, V ⊆ [n], let E(U, V) = {uv ∈ E | u ∈ U, v ∈ V} denote the set of edges between U and V. Let m(U) and m(U, V) denote the number of edges in E(U) and E(U, V), respectively. We also abuse notation and let m(H) be the number of edges in a subgraph H (e.g., m(G) = m).
High probability conventions. Throughout the article, the randomized algorithms presented succeed with high probability, that is, with probability ≥ 1 − 1/n^{Ω(1)}. Formally, this means the probability of success is ≥ 1 − 1/n^c, for some arbitrary constant c > 0. For all of these algorithms, the value of c can be increased to any arbitrary value (i.e., improving the probability of success of the algorithm) by increasing the asymptotic running time of the algorithm by a constant factor that depends only on c. For the sake of simplicity of exposition, we do not explicitly keep track of these constants (which are relatively well behaved).
Overview of the Algorithms
1.3.1 The BIS Algorithm. Our discussion of the BIS algorithm follows Figure 1, which depicts the main components of one level of our recursive algorithm. Our algorithms rely on several building blocks, as described next.
Exactly count edges. One can exactly count the edges between two subsets of vertices, with a number of queries that scales nearly linearly in the number of such edges. Specifically, a simple deterministic divide and conquer algorithm to compute m(U, V) using O(m(U, V) log n) BIS queries is described later in Lemma 4.1.
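A sketch of this divide-and-conquer idea is given below. The BIS oracle is simulated from an explicit edge list purely for illustration; a real implementation would replace the bis function with the actual query. If the pair (U, V) spans no edge, a single query suffices; otherwise the larger side is halved and both halves are handled recursively, so the number of queries grows roughly like m(U, V) log n, in line with the lemma cited above.

```python
# Sketch: exact edge counting between disjoint vertex sets U and V using only
# bipartite independent set (BIS) queries. The oracle is simulated from an
# explicit edge set for illustration.
def make_bis_oracle(edges):
    edge_set = {frozenset(e) for e in edges}
    def bis(U, V):
        # True iff there is NO edge between U and V.
        return not any(frozenset((u, v)) in edge_set for u in U for v in V)
    return bis

def count_edges(bis, U, V):
    if not U or not V or bis(U, V):   # one query: no edge between U and V
        return 0
    if len(U) == 1 and len(V) == 1:   # exactly one edge remains
        return 1
    if len(U) >= len(V):              # split the larger side and recurse
        mid = len(U) // 2
        return count_edges(bis, U[:mid], V) + count_edges(bis, U[mid:], V)
    mid = len(V) // 2
    return count_edges(bis, U, V[:mid]) + count_edges(bis, U, V[mid:])

edges = [(1, 4), (2, 5), (3, 5)]
bis = make_bis_oracle(edges)
print(count_edges(bis, [1, 2, 3], [4, 5, 6]))  # -> 3
```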
Sparsify. The idea is now to sparsify the graph in such a way that the number of remaining edges is a good estimate for the original number of edges (after scaling). Consider sparsifying the graph by coloring the vertices of graph and only looking at the edges going between certain pairs of color classes (in our algorithm, these pairs are a matching of the color classes). We prove that it suffices to only count the edges between these color classes, and we can ignore the edges with both endpoints inside a single color class.
For any k satisfying 1 ≤ k ≤ ⌊n/2⌋, let U_1, . . . , U_k, V_1, . . . , V_k be a uniformly random partition of [n]. Then, with high probability, 2k · Σ_i m(U_i, V_i) estimates m to within a factor of 1 ± ε, provided that k ≤ ε√m/(c log n) for some constant c; the precise inequality and its proof are given in Section 3. Specifically, if we set G_i to be the induced bipartite subgraph on U_i and V_i, then 2k Σ_i m(G_i) is a good estimate for m(G).

Figure 1 (one round of the recursive BIS algorithm): in the first step, we color the vertices and sparsify the graph by only looking at the edges between vertices of the same color. In the second step, we coarsely estimate the number of edges in each colored subgraph. Next, we group these subgraphs based on their coarse estimates, and we subsample from the groups with a relatively large number of edges. In the final step, we exactly count the edges in the sparse subgraphs, and we recurse on the dense subgraphs.

Now the graph is bipartite. The preceding sparsification method implies that we can assume without loss of generality that the graph is bipartite. Indeed, invoking the lemma with k = 1, we see that estimating the number of edges between the two color classes is equivalent to estimating the total number of edges, up to a factor of 2. For the rest of the discussion, we will consider colorings that respect the bipartition.
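The following sketch illustrates the sparsification estimator on an explicit random graph: the vertices are randomly coloured into 2k classes, the classes are paired up, only edges crossing a matched pair are counted, and the count is rescaled by 2k. The graph, the value of k and the use of an explicit edge list are illustrative assumptions; the algorithm itself would obtain these counts through BIS queries.

```python
# Sketch of the sparsification estimator: randomly partition the vertices into
# 2k colour classes, treat classes (2i, 2i+1) as the matched pair (U_i, V_i),
# and estimate m by 2k times the number of edges crossing matched pairs.
import random

def sparsify_estimate(n, edges, k, rng=random):
    colour = {v: rng.randrange(2 * k) for v in range(n)}
    crossing = 0
    for u, v in edges:
        cu, cv = colour[u], colour[v]
        if cu // 2 == cv // 2 and cu != cv:   # endpoints land in a matched pair
            crossing += 1
    return 2 * k * crossing                    # unbiased: each edge crosses with prob 1/(2k)

rng = random.Random(1)
n = 2000
edges = {tuple(sorted(rng.sample(range(n), 2))) for _ in range(20000)}
m = len(edges)
print(m, sparsify_estimate(n, edges, k=4, rng=rng))
```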
Coarse estimator. We give an algorithm that coarsely estimates the number of edges in a (bipartite) subgraph, up to a O(log² n) factor, using only O(log³ n) BIS queries.
The subproblems. After coloring the graph, we have reduced the problem to estimating the total number of edges in a collection of (disjoint) bipartite subgraphs. However, certain subgraphs may still have a large number of edges, and it would be too expensive to directly use the exact counting algorithm on them.
Reducing the number of subgraphs in a collection via importance sampling. Using the coarse estimates, we can form O (log n) groups of bipartite subgraphs, where each group contains subgraphs with a comparable number of edges. For the groups with only a polylogarithmic number of edges, we can exactly count edges using polylog(n) BIS queries via the exact count algorithm mentioned earlier. For the remaining groups, we subsample a polylogarithmic number of subgraphs from each group. This new estimate is a good approximation to the original quantity, with high probability. This corresponds to the technique of importance sampling that is used for variance reduction when estimating a sum of random variables that have comparable magnitudes.
Sparsify and reduce. We use the sparsification algorithm on each graph in our collection. This increases the number of subgraphs while reducing (by roughly a factor of k) the total number of edges in these graphs. The number of edges in the new collection is a reliable estimate for the number of edges in the old collection. We will choose k to be a constant so that every sparsification round reduces the number of edges by a constant factor.
If the number of graphs in the collection becomes too large, we reduce it in one of two ways. For the subgraphs with relatively few edges, we exactly count the number of edges using only polylog(n) queries. For the dense subgraphs, we can apply the preceding importance sampling technique and retain only polylog(n) subgraphs. Every basic operation in this scheme requires polylog(n) BIS queries, and the number of subgraphs is polylog(n). Therefore, a round can be implemented using polylog(n) BIS queries. Now, since every round reduces the number of edges by a constant factor, the algorithm terminates after O (log n) rounds, resulting in the desired estimate for m using only polylog(n) queries in total. Figure 1 depicts the main components of one round.
We have glossed over some details regarding the reweighting of intermediate estimates, as both the sparsification and importance sampling steps involve subsampling and rescaling. To handle this, the algorithm will maintain a weight value for each subgraph in the collection (starting with unit weight). Then, these weights will be updated throughout the execution, and they will be used during coarse estimation. For the final estimate, the algorithm will output a weighted sum of the estimates for the remaining subgraphs in addition to the weighted version of the exactly counted subgraphs. By using these weights to properly rescale estimates and counts, the algorithm will achieve a good estimate for m with high probability.
The IS Algorithm.
We move on to describe our second algorithm, based on IS queries. As with the BIS algorithm, the main building block for the IS algorithm is an efficient way to exactly count edges using IS queries. The exact counting algorithm works by first breaking the vertices of the graph into independent sets in a greedy fashion and then grouping these independent sets into larger independent sets using (yet again) a greedy algorithm. The resulting partition of the graph into independent sets has the property that every two sets have an edge between them, and this partition can be computed using a number of queries that is roughly m. This is beneficial, because when working on the induced subgraph on two independent sets, the IS queries can be interpreted as BIS queries. As such, edges between parts of the partition can be counted using the exact counting algorithm, modified to use IS queries. The end result is that for a given set U ⊆ [n], one can compute m(U), the number of edges with both endpoints in U, using O(m(U) log n) IS queries. This algorithm is described in Section 5.1. Now, we can sparsify the graph to reduce the overall number of IS queries. In contrast to the BIS case, we do not know how to design a coarse estimator using only IS queries (see Section 5.3). This prohibits us from designing a similar algorithm. Instead, we estimate the number of edges in one shot by coloring the graph with a large number of colors and estimating the number of edges going between a matching of the color classes. This is somewhat counterintuitive. An initial sparsification attempt might be to count only the edges going between a single pair of colors. If the total number of colors is 2k, then we expect to see m/C(2k, 2) edges between this pair, where C(2k, 2) = k(2k − 1) is the number of pairs of color classes. Therefore, we could set k to be large and invoke Lemma 5.3. Scaling by a factor of C(2k, 2), we would hope to get an unbiased estimator for m.
Unfortunately, a star graph demonstrates that this approach does not work, due to the large variance of this estimator. If we randomly color the vertices of the star graph with 2k colors, then out of the C(2k, 2) pairs of color classes, only 2k − 1 pairs have any edge going between them. Thus, if we only choose one pair of color classes, then with high probability one of the following two cases occurs: either (i) there is no edge crossing the color pair or (ii) the number of edges crossing the pair is ≈ m/2k. In both cases, our estimate after scaling by a factor of C(2k, 2) will be far from the truth.
At the other extreme, most edges will be present if we look at the edges crossing all pairs of color classes. Indeed, the only edges we miss have both endpoints in a color class, and this accounts for only a 1/2k fraction of the total number of edges. Thus, this does not achieve any substantial sparsification.
By using a matching of the color classes, we simultaneously get a reliable estimate of the number of edges and a sufficiently sparsified graph (see Lemma 3.2). Let U_1, . . . , U_k, V_1, . . . , V_k be a random partition of the vertices into 2k color classes. The lemma implies that, with high probability, the estimator 2k Σ_{i=1}^{k} m(U_i, V_i) approximates m to within a factor of 1 ± ε. Hence, as long as we choose k to be less than ε√m/polylog(n), we approximate m up to a factor of (1 + O(ε)). We use geometric search to find such a k efficiently.
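The simulation below illustrates the variance argument on a star graph: scaling the edge count between a single random pair of colour classes is wildly unstable, whereas the matched-pairs estimator of the previous paragraph is well concentrated. The graph size, the number of colours and the number of trials are arbitrary choices for illustration.

```python
# Illustration of the variance argument on a star graph: single-pair scaling
# versus the matched-pairs estimator. Parameters are illustrative only.
import random
import statistics

def star_edges(n):
    return [(0, v) for v in range(1, n)]

def single_pair_estimate(n, edges, k, rng):
    colour = {v: rng.randrange(2 * k) for v in range(n)}
    a, b = rng.sample(range(2 * k), 2)          # one random pair of classes
    cnt = sum(1 for u, v in edges if {colour[u], colour[v]} == {a, b})
    return k * (2 * k - 1) * cnt                # scale by C(2k, 2)

def matched_pairs_estimate(n, edges, k, rng):
    colour = {v: rng.randrange(2 * k) for v in range(n)}
    cnt = sum(1 for u, v in edges
              if colour[u] // 2 == colour[v] // 2 and colour[u] != colour[v])
    return 2 * k * cnt

rng = random.Random(3)
n, k = 10_001, 10
edges = star_edges(n)                            # m = 10,000 edges
single = [single_pair_estimate(n, edges, k, rng) for _ in range(200)]
matched = [matched_pairs_estimate(n, edges, k, rng) for _ in range(200)]
print(len(edges), statistics.mean(single), statistics.stdev(single))
print(len(edges), statistics.mean(matched), statistics.stdev(matched))
```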
To get a bound on the number of IS queries, we claim that we can compute Σ_{i=1}^{k} m(U_i, V_i) using Lemma 5.3, with a total of (k + m/k) polylog(n) IS queries. The first term arises since we have to make at least one query for each of the k color pairs (even if there are no edges between them). For the second term, we pay for both (i) the edges between the color classes and (ii) the total number of edges with both endpoints within a color class (since the number of IS queries in Lemma 5.3 scales with m(U ∪ V)). By the sparsification lemma, we know that (i) is bounded by O(m/k) with high probability, and we can prove an analogous statement for (ii). Hence, plugging in a value of k ≈ ε√m/polylog(n), the total number of IS queries is bounded by √m polylog(n)/ε.
Subsequent Work After Initial Publication
After the initial publication of our results [7], there has been some follow-up work [9,10,13,16]. Answering one of the open questions of our earlier work [7], Chen et al. [13] provide nearly matching upper and lower bounds on the number of IS queries for edge estimation. More precisely, they show that O(min(n/√m, √m) · poly(log(n), 1/ε)) IS queries are sufficient (the n/√m term is the new result, improving on the n²/m term). They also prove that Ω(min(n/√m, √m)/polylog(n)) IS queries are necessary for a certain family of graphs.
Dell et al. [16] provide new connections between decision and approximate counting results for problems such as k-SUM, k-Orthogonal-Vectors, and k-Clique by relating the complexity to edge estimation using certain queries. In particular, their work extends the previous work of Dell and Lapinskas [15] to the case of k-hypergraphs, and they consider a generalization of BIS queries to k-partite set queries. As one of their technical results, they improve the dependence on ε in Theorem 4.9 from ε −4 down to ε −2 .
Bhattacharya et al. [9,10] also consider the generalization of BIS queries to tripartite set queries, where they use such queries to estimate the number of triangles in a graph.
Outline
The rest of the article is organized as follows. We start in Section 2 by reviewing some necessary tools-concentration inequalities, importance sampling, and set size estimation via membership queries. In Section 3, we prove our sparsification result (Lemma 3.2).
In Section 4, we describe the algorithm for edge estimation for the BIS case. Section 4.1 describes the exact counting algorithm. In Section 4.2, we present the algorithm that uses BIS queries to coarsely estimate the number of edges between two subsets of vertices (Lemma 4.8). We combine these building blocks to construct our edge estimation algorithm using BIS queries in Section 4.3.
The case of IS queries is tackled in Section 5. In Section 5.1, we formally present the algorithms to exactly count edges between two subsets of vertices (Lemma 5.3). In Section 5.2, we present our algorithm using IS queries. In Section 5.3, we provide some discussion of why the IS case seems to be harder than the BIS case. We conclude in Section 6 and discuss open questions.
PRELIMINARIES
Here we present some standard tools that we need later on.
Concentration Bounds
For proofs of the following concentration bounds, see the book by Dubhashi and Panconesi [18].
, let ℓ and u be real numbers such that ℓ ≤ μ ≤ u. We need a version of Azuma's inequality that takes into account a rare bad event; the following is a restatement of Theorem 8.3 from Chung and Lu [14] in a simplified form (which is sufficient for our purposes).
Lemma 2.3 ([14]). Let f be any function of r independent random variables Y_1, . . . , Y_r whose value changes by at most c_i when the i-th variable is changed, where c_1, . . . , c_r are some nonnegative numbers. Let B be the event that a bad sequence happened.
Importance Sampling
Importance sampling is a technique for estimating a sum of terms. Assume that for each term in the summation, we can cheaply and quickly get an initial, coarse estimate of its value. Furthermore, assume that better estimates are possible but expensive. Importance sampling shows how to sample terms in the summation, then acquire a better estimate only for the sampled terms, to get a good estimate for the full summation. In particular, the number of samples is bounded independently of the original number of terms, depending instead on the coarseness of the initial estimates, the probability of success, and the quality of the final output estimate. Lemma 2.4 (Importance Sampling). Let U = {u_1, . . . , u_r} be a set of numbers, all contained in the interval [α/b, αb], for α > 0 and b ≥ 1. Let γ, ε > 0 be parameters. Consider the sum Γ = Σ_{i=1}^r u_i. For an arbitrary t ≥ (b^4/(2ε^2))(1 + ln(1/γ)), and i = 1, . . . , t, let X_i be a random sample chosen uniformly (and independently) from the set U (i.e., let j_i be uniformly and randomly picked from [r], and let X_i = u_{j_i}). Then, with probability at least 1 − γ, the estimate Y = (r/t) Σ_{i=1}^t X_i is a (1 ± ε)-approximation of Γ. The preceding lemma enables us to reduce a summation with many numbers into a much shorter summation (while introducing some error, naturally). The list/summation reduction algorithm we need is described next.
To this end, we have parameters ξ > 0, γ, b, and M, where ξ bounds the desired approximation quality, γ the failure probability, b the coarseness of the estimates, and M the range of the estimates. Then, one can compute a new (hopefully shorter) sequence of triples (H_1, w_1, e_1), . . . , (H_t, w_t, e_t) (the new sequence is a subsequence of the original sequence with reweighting). The new sequence complies with the preceding conditions, and furthermore, the estimate it induces is a (1 ± ξ)-approximation of the estimate induced by the original sequence. Proof. We break the interval [1, M] into h = ⌈log M⌉ intervals in the natural way, where the j-th interval corresponds to estimates in the range [2^{j−1}, 2^j). For all j ∈ [h], let Γ_j = Σ_{(H,w,e)∈U_j} w · w(H) be the total weight of structures in the j-th group, where U_j is the set of triples in the j-th group. By Lemma 2.4, for each j, the reduced sample yields a (1 ± ξ)-approximation of Γ_j with sufficiently high probability. Summing these inequalities over all j ∈ [h] implies that Y is the desired approximation with probability ≥ 1 − γ.
Specifically, the output sequence is constructed as follows. For all j ∈ [h], and for every triple (H, w, e) ∈ R_j, we add (H, w · W_j, e) to the output sequence. Clearly, the output sequence contains at most t triples from each of the h groups.
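The following is a minimal sketch of the uniform-sampling estimator behind Lemma 2.4: sample t terms with replacement, sum them, and rescale by r/t. The sample-size formula is transcribed from the lemma as stated above; the cap on t and all identifiers are our own choices for the demonstration.

import math
import random

def importance_sample_sum(values, eps, gamma, b, rng=random.Random(0)):
    """Uniform-sampling estimator in the spirit of Lemma 2.4: all values are
    assumed to lie in [alpha/b, alpha*b] for some alpha > 0; sample t of them
    uniformly with replacement, sum, and rescale by r/t with r = len(values)."""
    r = len(values)
    t = math.ceil(b ** 4 / (2 * eps ** 2) * (1 + math.log(1 / gamma)))
    t = min(t, 10 * r)  # cap the sample size for this toy demonstration
    sampled = [values[rng.randrange(r)] for _ in range(t)]
    return (r / t) * sum(sampled)

if __name__ == "__main__":
    rng = random.Random(1)
    vals = [rng.uniform(5.0, 20.0) for _ in range(100_000)]  # coarseness b = 2
    print(f"exact = {sum(vals):.0f}, "
          f"estimate = {importance_sample_sum(vals, eps=0.05, gamma=0.01, b=2.0):.0f}")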
Estimating Subset Size via Membership Oracle Queries
We present here a standard tool for estimating the size of a subset via membership oracle queries. This is well known, but we provide the details for the sake of completeness. Lemma 2.7. Consider two (finite) sets B ⊆ U , where n = |U |. Let ε ∈ (0, 1) and γ ∈ (0, 1/2) be parameters. Let д > 0 be a user-provided guess for the size of |B|. Consider a random sample R, taken with replacement from U , of size r = c 5 ε −2 (n/д) log γ −1 , where c 5 is sufficiently large. Next, consider the estimate Y = (n/r )|R ∩ B| to |B|. Then, we have the following: Both of the preceding statements hold with probability ≥ 1 − γ .
, and this is ≤ γ for c 5 a sufficiently large constant.
(B) We have two cases to consider. For the first case, suppose that |B| < д/4. In this case, if X = r i=1 X i is the random variable as described part (A), then each X i is an indicator variable with probability p = |B|/n < д/(4n) and by Chernoff's inequality (Lemma 2.2(C)) and again this is ≤ γ for c 5 which is ≤ γ /2 for c 5 ≥ 24 ln 2, as γ ≤ 1/2. Adding these two failure probabilities together gives a bound of at most γ as required.
Lemma 2.8. Consider two sets B ⊆ U, where n = |U|. Let ξ, γ ∈ (0, 1) be parameters such that γ < 1/log n. Assume that one is given access to a membership oracle that, given an element x ∈ U, returns whether or not x ∈ B. Then, one can compute an estimate s such that (1 − ξ)|B| ≤ s ≤ (1 + ξ)|B|, with probability ≥ 1 − γ. Proof. Let д_i = n/2^{i+2}. For i = 1, . . . , ⌈log n⌉, use the algorithm of Lemma 2.7 with ε = 0.5, with the probability of failure being γ/(8 log n), and let Y_i be the returned estimate. The algorithm stops this loop as soon as Y_i ≥ 4д_i. Let I be the value of i when the loop stopped. The algorithm now calls Lemma 2.7 again with д_I and ε = ξ, and returns the value of Y as the desired estimate.
Overall, for T = 1 + ⌈log n⌉, the preceding makes T calls to the subroutine of Lemma 2.7, and the probability that any of them fails is at most Tγ/(8 log n) < γ. Assume that all invocations of Lemma 2.7 were successful. In particular, Lemma 2.7 guarantees that if Y > 4д_I ≥ д_I/2, then the estimate returned is a (1 ± ε)-approximation to the desired quantity.
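A small Python sketch of the two-stage estimator of Lemmas 2.7 and 2.8 follows: one coarse round per guess д_i, halving the guess until the returned estimate is large relative to it, then one accurate round. The constant c5 and the use of a single log n factor in place of log γ^{-1} are simplifications of ours, and the membership oracle is just a Python predicate.

import math
import random

def estimate_size(universe_size, member, guess, eps, c5=4, rng=random.Random(0)):
    """One round of the Lemma 2.7-style estimator: sample with replacement from
    the universe and rescale the number of hits reported by the membership oracle."""
    n = universe_size
    # The lemma's sample size is roughly c5 * (n/guess) * eps^-2 * log(1/gamma);
    # here the failure probability is folded into a single log n factor.
    r = math.ceil(c5 * (n / guess) * math.log(n) / eps ** 2)
    hits = sum(member(rng.randrange(n)) for _ in range(r))
    return (n / r) * hits

def estimate_size_adaptive(universe_size, member, xi, rng=random.Random(0)):
    """Lemma 2.8-style wrapper: halve the guess until the coarse estimate is
    large relative to it, then run one accurate round with accuracy xi."""
    n = universe_size
    for i in range(1, math.ceil(math.log2(n)) + 1):
        g = n / 2 ** (i + 2)
        if estimate_size(n, member, g, eps=0.5, rng=rng) >= 4 * g:
            return estimate_size(n, member, g, eps=xi, rng=rng)
    return estimate_size(n, member, 1.0, eps=xi, rng=rng)

if __name__ == "__main__":
    n = 1 << 16
    B = set(random.Random(2).sample(range(n), 3000))
    est = estimate_size_adaptive(n, lambda x: x in B, xi=0.2)
    print(f"|B| = {len(B)}, estimate = {est:.0f}")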
Estimating Subset Size via Emptiness Oracle Queries.
Consider the variant where we are given a set X ⊆ U. Given a query set Q ⊆ U, we have an emptiness oracle that tells us whether Q ∩ X is empty. Using an emptiness oracle, one can get a (1 ± ε)-approximation of the size of X using relatively few queries. The following result is implied by the work of Aronov and Har-Peled [6, Theorem 5.6] and Falahatgar et al. [22]-the latter result has better bounds if the failure probability is not required to be polynomially small. Lemma 2.9 ([6,22]). Consider a set X ⊆ U, where n = |U|. Let ε ∈ (0, 1) be a parameter. Assume that one is given access to an emptiness oracle that, given a query set Q ⊆ U, returns whether or not X ∩ Q ≠ ∅. Then, one can compute an estimate s such that (1 − ε)|X| ≤ s ≤ (1 + ε)|X|, using O(ε^{−2} log n) emptiness queries. The returned estimate is correct with probability ≥ 1 − 1/n^{Ω(1)}.
We sketch the basic idea of the algorithm used in the preceding lemma. For a guess д of the size of X, consider a random sample Q where every element of U is picked with probability 1/д. The probability that Q avoids X is α(д) = (1 − 1/д)^{|X|}. The function α(д) is (i) monotonically increasing, (ii) close to zero when д ≪ |X|, (iii) ≈ 1/e for д = |X|, and (iv) close to 1 if д ≫ |X|. One can estimate the value α(д) by repeated random sampling and checking if the random sample intersects X using emptiness queries. Given such an estimate, one can then perform an approximate binary search for the value of д such that α(д) = 1/e, which corresponds to д = |X|. See the work of Aronov and Har-Peled [6] and Falahatgar et al. [22] for further details.
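The sketch below illustrates this idea (it is not the actual algorithm of [6, 22]): it estimates α(д) empirically via an emptiness oracle and runs an approximate binary search for the д with α(д) ≈ 1/e. The trial counts and search parameters are arbitrary choices for the demonstration.

import math
import random

def estimate_alpha(universe, is_disjoint, g, trials, rng):
    """Estimate alpha(g) = Pr[a random subset, containing each element of the
    universe independently with probability 1/g, misses X], using only the
    emptiness oracle is_disjoint."""
    misses = 0
    for _ in range(trials):
        Q = [x for x in universe if rng.random() < 1.0 / g]
        misses += is_disjoint(Q)
    return misses / trials

def emptiness_size_estimate(universe, is_disjoint, trials=150, rng=random.Random(0)):
    """Search for the guess g with alpha(g) close to 1/e; that g is roughly |X|."""
    lo, hi = 1.0, float(len(universe))
    for _ in range(12):  # geometric binary search over the guess g
        mid = math.sqrt(lo * hi)
        if estimate_alpha(universe, is_disjoint, mid, trials, rng) < 1.0 / math.e:
            lo = mid  # random subsets still hit X too often, so g is below |X|
        else:
            hi = mid
    return math.sqrt(lo * hi)

if __name__ == "__main__":
    n = 800
    X = set(random.Random(3).sample(range(n), 50))
    oracle = lambda Q: len(X.intersection(Q)) == 0
    print(f"|X| = {len(X)}, estimate = {emptiness_size_estimate(range(n), oracle):.0f}")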
EDGE SPARSIFICATION BY RANDOM COLORING
In this section, we present and prove that coloring vertices and counting only edges between specific color classes provides a reliable estimate for the number of edges in the graph. This is distinct from standard graph sparsification algorithms, which usually sparsify the edges of the graph directly (usually by sampling edges).
We need the following technical lemma.
Lemma 3.1. Let C be a set of r elements, colored randomly by k colors-specifically, for every element x ∈ C, one chooses randomly (independently and uniformly) a color for it from the set [k]. For i ∈ [k], let n_i be the number of elements of C with color i. Let n be a positive integer and c > 1 be an arbitrary constant. Then, Proof. (A) For ℓ ∈ [r], let X_ℓ be the indicator variable that is 1 with probability 1/k and 0 otherwise. For X = Σ_{ℓ=1}^r X_ℓ, notice that n_i is distributed identically to X and that E[X] = E[n_i] = r/k. Using Chernoff's inequality (Lemma 2.2(A)), we have
Lemma 3.2. (A)
There exists an absolute constant ς such that the following holds. For every n, let G = ([n], E) be a graph with m edges. For any 1 ≤ k ≤ ⌊n/2⌋, let U_1, . . . , U_2k be a uniformly random partition of [n]. Then, (B) There exists an absolute constant ς such that the following holds. Similarly, for every n, disjoint sets U, V ⊆ [n], and k such that 2 ≤ k ≤ max{|U|, |V|}, let U_1, . . . , U_k, V_1, . . . , V_k be uniformly random partitions of U and V, respectively. Then, Proof. (A) Consider the random process that colors vertex t, at time t ∈ [n], with a uniformly random color Y_t ∈ [2k]. The colors correspond to the partition of [n] into classes U_1, . . . , U_2k. Define the counting function f accordingly. The probability that a specific edge uv is counted by f is 1/(2k). Indeed, fix the color of u, and observe that there is only one choice of the color of v such that uv would be counted. As such, the expected value of f is m/(2k). Consider the Doob martingale X_0, X_1, . . . , X_n obtained by exposing the colors Y_1, . . . , Y_n one at a time. We are interested in bounding the quantity |X_t − X_{t−1}|. To this end, fix the value of Y_{t−1}. Let N(t) be the set of neighbors of t in the graph and deg(t) = |N(t)| be the degree of t. Let N_{<t} = N(t) ∩ [t − 1] and N_{>t} = N(t) ∩ [t + 1 : n] be the before/after set of neighbors of t, respectively. Let C^i_{<t} (respectively, C^i_{>t}) be the number of neighbors of t in N_{<t} (respectively, N_{>t}) colored with color i. For a color i ∈ [2k], let π(i) = 1 + ((k + i − 1) mod 2k) be its matching color.
Fix two distinct colors i, j ∈ [2k], and let д(i) and д(j) denote the corresponding conditional quantities. To see why the preceding is true, observe that any edge involving two vertices in [t − 1] has the same contribution to д(i) and д(j). Similarly, an edge with a vertex in [t − 1] and a vertex in [t + 1 : n] has the same contribution to both terms. The same argument holds for an edge involving vertices with indices strictly larger than t. As such, only the edges adjacent to t have a different contribution, which is as stated. Rearranging, we have by Lemma 3.1, with C = N(t) and r = deg(t), that the desired bound holds with probability at least 1 − β, for β = 4/n^c and any constant c > 1. Furthermore, we have that k · Γ is a (1 ± ξ)-approximation to m(G), where ξ = (ςk√m log n)/m, with high probability. For our purposes, we need ξ ≤ ε/(8 log n). Setting k = 4, the preceding implies that one can apply the refinement algorithm of Lemma 3.2 if m = Ω(ε^{−2} log^4 n). With high probability, the number of edges in the new k subgraphs (i.e., Γ), scaled by k, is a good estimate (i.e., within a 1 ± ε/(8 log n) factor) for the number of edges in the original graph, and furthermore, the number of edges in the new subgraphs is small (formally, E[Γ] ≤ m/4, and with high probability Γ ≤ m/2).
EDGE ESTIMATION USING BIS QUERIES
Here we show how to get exact and approximate counts for the number of edges in a graph using BIS queries (see also [5,30]).
Exactly Counting Edges Using BIS Queries
Proof. We use a recursive divide-and-conquer approach, which intuitively builds a quadtree over the pair (U, V). Specifically, consider the incidence matrix M of size |U| × |V|, where a column corresponds to an element of V and a row to an element of U. An entry in the matrix is equal to 1 if there is an edge between the corresponding nodes in the original graph, and it is zero otherwise. The task at hand, as such, is to count the number of ones in the matrix. A BIS query then corresponds to deciding if an induced submatrix is all zero. We now conceptually build a tree (i.e., a quadtree) by partitioning the matrix into four submatrices of the same dimensions (in the natural way), and recursively build a quadtree for each submatrix. Intuitively, the algorithm counts the 1s in the matrix by tracking each of the 1s to its corresponding leaf node in the quadtree.
To this end, the algorithm first issues the query BIS(U, V). If the return value is false, then there are no edges between U and V, and the algorithm sets m(U, V) to zero, and returns. If |U| = |V| = 1, then this also determines if m(U, V) is 0 or 1 in this case, and the algorithm returns. The remaining case is that m(U, V) ≠ 0, and the algorithm recurses on the four children of (U, V), which will correspond to the pairs (U_1, V_1), (U_1, V_2), (U_2, V_1), and (U_2, V_2), where U_1, U_2 and V_1, V_2 are equipartitions of U and V, respectively. We are using here the identity m(U, V) = m(U_1, V_1) + m(U_1, V_2) + m(U_2, V_1) + m(U_2, V_2). If m(U, V) = 0 holds, then the number of queries is exactly equal to 1, and the lemma is true in this case. For the rest of the proof, we assume that m(U, V) ≥ 1. To bound the number of queries, imagine building the whole quadtree for the adjacency matrix of U × V with entries for E(U, V). Let X be the set of 1 entries in this matrix, and let k = |X| (i.e., X corresponds to the set of leaves that are labeled 1 in the quadtree). The height of the quadtree is h = O(max{log |U|, log |V|}). Let X_1 be the set of nodes in the quadtree that are either in X or are ancestors of nodes of X. It is not hard to verify that |X_1| = O(k + k log(|U||V|)) = O(k log n). Finally, let X_2 be the set of nodes in the quadtree that are either in X_1 or their parent is in X_1. Clearly, the algorithm visits only the nodes of X_2 in the recursion, thus implying the desired bound.
As for the budgeted version, run the algorithm until it has accumulated T = O(t/log n) edges in the working set, where T > t. If this never happens, then the number of edges of the graph is at most T, as desired, and the preceding analysis applies. Otherwise, the algorithm stops, and applying the same argument as before, we get that the number of BIS queries is bounded by O(T log n) = O(t).
Remark. The number of BIS queries made by the algorithm of Lemma 4.1 is at least max{m(U , V ), 1}, since every edge with one endpoint in U and the other in V is identified (on its own, explicitly) by such a query.
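The following Python sketch mirrors the quadtree recursion described above: it counts the 1-entries of the incidence matrix using only "is this submatrix all zero?" queries, which play the role of BIS queries. The query-counting demo and all identifiers are illustrative rather than taken from the article.

def count_ones(has_edge, rows, cols):
    """Count the 1-entries of the incidence matrix restricted to rows x cols,
    using only 'is this submatrix all zero?' queries (the role played by BIS
    queries in the quadtree recursion described above)."""
    if not rows or not cols:
        return 0
    if not has_edge(rows, cols):
        return 0
    if len(rows) == 1 and len(cols) == 1:
        return 1
    rmid, cmid = (len(rows) + 1) // 2, (len(cols) + 1) // 2
    total = 0
    for R in (rows[:rmid], rows[rmid:]):
        for C in (cols[:cmid], cols[cmid:]):
            total += count_ones(has_edge, R, C)
    return total

if __name__ == "__main__":
    import random
    U, V = list(range(50)), list(range(50, 120))
    E = {(u, v) for u in U for v in V if random.random() < 0.02}
    queries = [0]
    def bis(R, C):
        queries[0] += 1
        return any((u, v) in E for u in R for v in C)
    print(f"m(U,V) = {len(E)}, counted = {count_ones(bis, U, V)}, queries = {queries[0]}")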
We note that we can use the above algorithm to exactly identify the edges of an arbitrary graph using BIS queries with a cost of O(log n) overhead per edge (see also [5,30]). However, we will not need to do so in the sequel. We remain with the task of computing Z. Let U_0 = [n]. For i = 1, . . . , T = ⌈log_2 n⌉, let A_i be the elements of U_{i−1} whose i-th bit in their binary representation is 1.
Observe that every edge e in G has an index i such that its two vertices differ in the i th bit. Note that either one of the endpoints of e was already added to Z before the i th iteration, or it would be discovered and its endpoints added to Z in the i th iteration. As such, the set Z is computed correctly. Since E 1 , . . . , E T are disjoint sets, it follows that computing For the budgeted version, we run the algorithm until τ = Ω(t ) BIS queries have been performed. If this does not happen, then the graph has at most τ edges, and they were reported by the algorithm. Otherwise, we know that the graph must have at least τ / log n edges, as desired.
The Coarse Estimator Algorithm
Let G = ([n], E) be a graph and let U, V ⊆ [n] be disjoint subsets of the vertices. The task at hand is to estimate m(U, V), using polylog BIS queries.
For a subset S ⊆ [n], define N(S) to be the union of the neighbors of all vertices in S. For a vertex v, let deg_S(v) denote the number of neighbors of v that lie in S. For i ∈ [log n], define U_i to be the set of vertices in U whose degree into V is between 2^i and 2^{i+1}. In the corresponding averaging argument, the first inequality states that there is a term as large as the average. As for the second inequality, observe that it holds for every i. Suppose that we have an estimate e for the number of edges between U and V in the graph. Consider the test CheckEstimate, depicted in Algorithm 4.1, for checking if the estimate e is correct up to polylogarithmic factors using a logarithmic number of BIS queries.
By a union bound over the loop variable values, the probability that the test accepts is at most 1/4. (B)
It is enough to show that the probability is at least 1/2 when the loop variable attains the value α given by Claim 4.5. In this case, we have that |U_α| ≥ m(U,V)/(2^{α+1}(log n + 1)), and thus the sample intersects U_α with good probability, since n ≥ 16. Furthermore, since deg_V(u) ≥ 2^α for all u ∈ U_α, it follows that when U′ ∩ U_α ≠ ∅, then |N(U′ ∩ U_α)| ≥ 2^α. Thus, we can bound the acceptance probability from below. Armed with the preceding test, we can easily estimate the number of edges up to a O(log n) factor by doing a search, where we start with e = n^2 and halve the estimate each iteration. The algorithm is depicted in Algorithm 4.2. Proof. For any fixed value of the loop variable j such that 2^j ≥ 4m(U, V)(log n + 1), the expected number of accepts is at most t/4 using Claim 4.6(A), where t = 128 log n. The probability that we see at least 3t/8 = t/4 + t/8 accepts is bounded by exp(−2(t/8)^2/t) = exp(−t/32) ≤ n^{−4} by Chernoff's inequality (Lemma 2.2(A)). Taking the union over all values of j, the probability that the algorithm returns 2^j, when 2^j ≥ 4m(U, V)(log n + 1), is at most 2n^{−4} log n.
However, when 2^j ≤ m(U, V)/(4 log n), the expected number of accepts is at least t/2, by Claim 4.6(B), and so the probability that we see at least 3t/8 = t/2 − t/8 accepts is at least 1 − exp(−2t/8^2) ≥ 1 − n^{−4} by Chernoff's inequality (Lemma 2.2(A)). Hence, conditioned on the event that the algorithm has not already returned a bigger value of j, the probability that we accept for the unique j in this range is at least 1 − n^{−4}. Overall, by a union bound, the probability that the estimator outputs an estimate e that does not satisfy (8 log n)^{−1} ≤ e/m(U, V) ≤ 8 log n is at most 4n^{−4} log n. The number of BIS queries is bounded by O(log^3 n), since for each value of j there are t = 128 log n trials of CheckEstimate, each of which makes log n + 1 queries to the BIS oracle.
Summarizing the preceding, we get the following result. Lemma 4.8. For n ≥ 16, and arbitrary U, V ⊆ [n] that are disjoint, the randomized algorithm CoarseEstimator(U, V) makes at most c_ce log^3 n BIS queries (for a constant c_ce) and outputs e ≤ n^2 such that with probability at least 1 − 4n^{−4} log n, we have (8 log n)^{−1} ≤ e/m(U, V) ≤ 8 log n.
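The halving-search skeleton behind CoarseEstimator can be sketched as follows. The BIS-based CheckEstimate test is abstracted behind a callback; the toy accept test in the demo simply knows m and mimics the two guarantees of Claim 4.6, so the sketch exercises only the search logic, and all constants and names are ours.

import math
import random

def coarse_estimate(n, accepts):
    """Halving-search skeleton in the spirit of CoarseEstimator: starting from
    e = n^2, run the randomized test accepts(e) t = 128 log n times and return
    the first estimate e on which at least 3t/8 of the trials accept."""
    t = 128 * max(1, math.ceil(math.log2(n)))
    e = n * n
    while e >= 1:
        if sum(accepts(e) for _ in range(t)) >= 3 * t / 8:
            return e
        e //= 2
    return 1

if __name__ == "__main__":
    n, m = 10_000, 250_000
    # Toy stand-in for CheckEstimate: accepts with probability min(1, m/(2e)),
    # which is small when e is much larger than m and large when e is much smaller.
    toy_test = lambda e: random.random() < min(1.0, m / (2 * e))
    print(f"m = {m}, coarse estimate = {coarse_estimate(n, toy_test)}")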
The Overall BIS Approximation Algorithm
Given a graph G = ( n , E), next we describe an algorithm that makes polylog(n)/ε 4 BIS queries to estimate the number of edges in the graph within a factor of (1 ± ε).
The algorithm for estimating the number of edges in the graph is going to maintain a datastructure D containing the following: (A) An accumulator φ-this is a counter that maintains an estimate of the number of edges already handled.
The estimate based on D of the number of edges in the original graph G = ( n , E) is The number of active edges in D is m active Next, the algorithm uses the summation reduction algorithm of Lemma 2.5 applied to the list of triples in D, with ξ = ε/(8 log n). This reduces the number of triples in D to be at most L len while introducing a multiplicative error of (1 ± ξ ).
Analysis. Number of iterations.
Initially, the number of active edges is at most m. Every time Refine is executed, this number reduces by a factor of 2 with high probability using Lemma 3.2(B) (in expectation, the reduction is by a factor of 4). As such, after log m ≤ log( n 2 ) ≤ 2 log n iterations there are no active edges, and then the algorithm terminates.
Number of BIS queries. Clearly, because Reduce is used on D in each iteration, the algorithm maintains the invariant that the number of triples in D is at most O (L len ), where L len = O (ε −2 log 8 n) as specified by Remark 2.6.
The procedure Cleanup applies the algorithm of Lemma 4.1 to decide whether a triple in the list has at least 2L_small edges associated with it, or fewer edges, where L_small = Θ(ε^{−2} log^4 n). Inside each iteration, Cleanup introduces no error. By the choice of parameters, Refine introduces a multiplicative error that is at most 1 ± ξ (see Remark 3.3). Similarly, Reduce introduces a multiplicative error bounded by 1 ± ξ (see Remark 2.6). As such, the multiplicative approximation of the algorithm lies in the interval [1 − ε, 1 + ε], since (1 − ε/(8 log n))^{1+2 log n} ≥ 1 − ε and (1 + ε/(8 log n))^{1+2 log n} ≤ 1 + ε, as easy calculations show.
Probability of success. Throughout this analysis, c will be a constant that can be chosen to be arbitrarily large. The algorithm may fail due to the following reasons: (i) the random 2-coloring in step (B) gives an estimate that is far from its expectation (this probability is at most 1/n c using Lemma 3.2(A)), (ii) the Refine step fails (the probability for the failure of each iteration is at most 1/n c using Lemma 3.2(B)), (iii) the coarse estimate in Reduce step fails (the probability for the failure of each iteration is at most 1/n c using Claim 4.7), and last (iv) the summation reduction in the Reduce step fails (the probability for the failure of each iteration is at most 1/n c using Lemma 2.5). Overall, every step performed by the algorithm had probability at most 1/n c to fail. The algorithm performs O (polylog(n)) steps with high probability, which implies that the algorithm succeeds with probability at least 1 − 1/n O (1) .
Proof. Let N(v) = {i | vi ∈ E} be the set of neighbors of v, and let E_v = {vi | vi ∈ E} be the corresponding set of edges. We have deg(v) = |N(v)| = |E_v|. For a query set Q ⊆ [n], deciding whether N(v) ∩ Q is empty is equivalent to deciding if any of the edges adjacent to v is in E_Q, and this is answered by the BIS query for ({v}, Q). Namely, the BIS oracle can function as an emptiness oracle for N(v) ⊆ [n]. Now, using the algorithm of Lemma 2.9, we can (1 ± ε)-approximate |N(v)| using O(ε^{−2} log n) queries, as claimed.
EDGE ESTIMATION USING IS QUERIES
This section describes and analyzes our IS query algorithm (Theorem 5.8). At the end, we also discuss limitations of IS queries, suggesting that IS queries may indeed be weaker than BIS queries.
Exactly Counting Edges Using IS Queries
We start with an exact edge counting algorithm for IS queries (see also [5,30]). At a high level, we use Lemma 4.1 after efficiently computing a suitable decomposition of our graph. Proof. Since U and V are disjoint and independent, we have that m(U ∪ V) = m(U, V). Furthermore, for any U′ ⊆ U and V′ ⊆ V, the query BIS(U′, V′) is equivalent to the query IS(U′ ∪ V′). As such, we can use the algorithm of Lemma 4.1, using the IS queries as a replacement for the BIS queries, yielding the result. The next step is to break the set of interest U into independent sets. Proof. Order the elements of U = {u_1, . . . , u_k} arbitrarily. The idea is to break U into independent sets, where each independent set is an interval I_j = {u_{i_j}, u_{i_j+1}, . . . , u_{i_{j+1}−1}}. This can be done in a greedy fashion from left to right, discovering the index where an interval stops being an independent set. Assume inductively that one has computed the first j such independent intervals I_1, . . . , I_j, and also assume that I_j ∪ {u_{i_{j+1}}} is not an independent set. Next, using binary search on the range {i_{j+1} + 1, . . . , n}, find the maximal β such that {u_{i_{j+1}}, . . . , u_β} is independent.
For any j, we have m(I j , I j+1 ) ≥ 1, which implies that the number of computed intervals τ satisfies τ ≤ m(U ) + 1. As such, this stage uses O ((1 + m(U )) log n)IS queries. This results in a decomposition of U into τ independent sets I 1 , . . . , I τ .
In the second stage, starting with the computed collection of independent sets, the algorithm greedily tries to merge sets. In each step, the algorithm takes two independent sets B,W in the current collection (for which it might be possible that their merged set is independent), and the algorithm uses an IS query to check whether B ∪ W is an independent set. If it is, then the algorithm merges the two sets into one independent set (replacing B,W by the set B ∪ W in the current collection of sets). Otherwise, the algorithm marks the two sets B and W as being incompatible with each other. Note that if B,W are incompatible, then for any B ⊇ B and W ⊇ W , the sets B and W are also incompatible. Namely, incompatibility is preserved under merger of independent sets, and the algorithm can keep track of the incompatible pairs under merger (importantly, a merger cannot decrease the number of incompatible pairs). The algorithm stops when all current sets are pairwise incompatible.
Each merge of two independent sets can be charged to the number of independent sets decreasing by one. Each pair of sets that is discovered to be incompatible can be charged to the edge witnessing that the merged set is not independent. Since every edge is only charged once by this process, it follows that the total number of IS queries performed by the second stage of the algorithm is at most τ + m(U ) ≤ 2m(U ) + 1.
The resulting collection of independent sets has the desired properties, completing the proof. Proof. Using the algorithm of Lemma 5.2, compute the decomposition of U into independent sets V_1, . . . , V_t. By construction, for any i < j, we have that m(V_i, V_j) ≥ 1, as some vertex of V_i is connected to some vertex in V_j. As such, going over all 1 ≤ i < j ≤ t, compute the set of edges E(V_i, V_j) using the algorithm of Lemma 5.1. This requires O(m(V_i, V_j) log n) IS queries. As such, the total number of IS queries used by this algorithm is O(m(U) log n). The budgeted version follows by running the algorithm until c t log n IS queries have been performed, for c a sufficiently large constant. If this happens, then the number of edges in the graph is larger than t (as otherwise, the preceding implies that the algorithm would have already terminated), and the algorithm stops and outputs this fact.
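A minimal sketch of the decomposition of Lemma 5.2 is given below, assuming an IS oracle exposed as a Python predicate: maximal independent intervals are found by binary search, and the resulting sets are merged greedily until every two remaining sets are incompatible. The bookkeeping that avoids re-querying incompatible pairs (and hence the precise query bound) is omitted, and all names are illustrative.

from itertools import combinations
import random

def max_independent_prefix(order, start, is_independent):
    """Binary search for the largest j such that order[start:j] is independent;
    each probe is one IS query."""
    lo, hi = start + 1, len(order)  # a single vertex is always independent
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if is_independent(order[start:mid]):
            lo = mid
        else:
            hi = mid - 1
    return lo

def decompose(order, is_independent):
    """Greedy decomposition in the spirit of Lemma 5.2: split the vertices into
    maximal independent intervals, then greedily merge sets whose union is still
    independent, until every two remaining sets are incompatible."""
    parts, i = [], 0
    while i < len(order):
        j = max_independent_prefix(order, i, is_independent)
        parts.append(order[i:j])
        i = j
    merged = True
    while merged:  # naive merging; incompatible pairs may be re-queried here
        merged = False
        for a in range(len(parts)):
            for b in range(a + 1, len(parts)):
                if is_independent(parts[a] + parts[b]):
                    parts[a] = parts[a] + parts[b]
                    del parts[b]
                    merged = True
                    break
            if merged:
                break
    return parts

if __name__ == "__main__":
    n = 60
    E = {frozenset(e) for e in combinations(range(n), 2) if random.random() < 0.08}
    is_ind = lambda S: all(frozenset(p) not in E for p in combinations(S, 2))
    parts = decompose(list(range(n)), is_ind)
    ok = all(not is_ind(parts[a] + parts[b]) for a, b in combinations(range(len(parts)), 2))
    print(f"{len(parts)} independent sets; every two sets share an edge: {ok}")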
Algorithms for Edge Estimation Using IS Queries
Our IS algorithm has two main subroutines. We first describe and analyze these, then we combine them for the overall algorithm, which is presented in Theorem 5.8.
Growing Search.
The following is an immediate consequence of Lemma 5.3. Proof. We color the vertices in U randomly using k = tε/(ς log n) colors for a constant ς to be specified shortly, and let U 1 , . . . ,U k be the resulting partition. By Lemma 3.2, we have for the and this holds with probability ≥ 1 − n −c 3 , where c 3 is an arbitrarily large constant, and ς is a constant that depends only on c 3 . For this to be a (1 ± ε)-approximation, we need that This in turn is equivalent to which holds because of the assumption that m(U ) ≥ max{L base , t 2 } in the statement.
To proceed, the algorithm starts computing the terms in the summation defining Γ, using the algorithm of Lemma 5.3. If at any point in time the summation exceeds M = 8(t^2/k) = O(ε^{−1} t log n), then the algorithm stops and reports that m(U) > 2t^2. Otherwise, the algorithm returns the computed count k · Γ as the desired approximation. In both cases, we are correct with high probability by Lemma 3.2.
We now bound the number of IS queries. If the algorithm computed Γ by determining exact edge counts for m(U i ) for all i ∈ k , then the number of queries would be k Proof. The algorithm starts by checking if the number of edges in m(U ) is at most L base = O (ε −4 log 4 n) using the algorithm of Lemma 5.4. Otherwise, in the i th iteration, the algorithm sets t i = √ 2t i−1 , where t 0 = √ L base , and invokes the algorithm of Lemma 5.5 for t i as the threshold parameter. If the algorithm succeeds in approximating the right size, we are done. Otherwise, we continue to the next iteration. Taking a union bound over the iterations, we have that the algorithm stops with high probability before t α > 4 m(U ). Let α be the minimum value for which this holds. The number of IS queries performed by the algorithm is , since this is a geometric sum.
Shrinking Search.
We are given a graph G = ([n], E) and a set U ⊆ [n]. The task at hand is to approximate m(U). Let N = |U|.
Given an oracle that can answer IS queries, we can decide if a specific edge uv exists in the set E(U) by performing an IS query on {u, v}. We can treat such IS queries as membership oracle queries in the set E of edges in the graph, where the ground set is the set of all possible edges Z = {ij | i < j and i, j ∈ U}, with |Z| = N(N − 1)/2. Invoking the algorithm of Lemma 2.8 in this case, with γ = 1/n^{O(1)}, implies a (1 ± ε)-approximation to m(U) using O((N^2/m(U)) ε^{−2} log n) IS queries. For our purposes, however, we need a budgeted version of this. Lemma 5.7. Given parameters t > 0, ξ ∈ (0, 1], and a set U ⊆ [n], with N = |U|, an algorithm can either (a) correctly report that m(U) ≤ N^2/(2t) or (b) return a (1 ± ξ)-approximation to m(U). The algorithm uses O(t log n) IS queries in case (a) and O(t ξ^{−2} log n) in case (b). The returned result is correct with high probability.
Proof. The idea is to use the sampling as done in Lemma 2.7, with д = N^2/(16t) and ε = 1/2, on the set of edges E(U) ⊆ Z. The sample R used is of size O((N^2/д) log n) = O(t log n), and we check for each one of the sampled edges if it is in the graph by using an IS query. If the returned estimate is at most д/2, then the algorithm returns that it is in case (a).
Otherwise, we invoke the algorithm of Lemma 2.7 again, with ε = ξ, to get the desired approximation, which is case (b). Combining the two bounds on the IS queries, we get that the i-th iteration used O(t_i ε^{−2} log^2 n) IS queries.
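As a sketch of the pair-sampling primitive used by the shrinking search, the code below samples pairs from U uniformly, tests each with an IS query on two vertices, and rescales by the number of possible pairs. The fixed sampling budget and all names are our own simplifications rather than the article's parameters.

import math
import random
from itertools import combinations

def pair_sampling_estimate(U, is_independent, eps, rng=random.Random(0)):
    """Estimate m(U) by sampling vertex pairs uniformly from U, testing each
    sampled pair with an IS query on two vertices, and rescaling by the number
    of possible pairs. The budget below is a fixed simplification; the lemma's
    budgeted version instead stops early when m(U) appears too small."""
    N = len(U)
    n_pairs = N * (N - 1) // 2
    budget = math.ceil(200 * math.log(max(N, 2)) / eps ** 2)
    hits = sum(0 if is_independent(rng.sample(U, 2)) else 1 for _ in range(budget))
    return n_pairs * hits / budget

if __name__ == "__main__":
    n = 300
    E = {frozenset(e) for e in combinations(range(n), 2) if random.random() < 0.05}
    is_ind = lambda S: all(frozenset(p) not in E for p in combinations(S, 2))
    est = pair_sampling_estimate(list(range(n)), is_ind, eps=0.25)
    print(f"m(U) = {len(E)}, estimate = {est:.0f}")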
The Overall IS Search Algorithm.
The algorithm stopped in the i-th iteration if t_i ≥ √m/2 or t_i ≥ n^2/m. In particular, for the stopping iteration I, we have t_I = O(min(√m, n^2/m)). As such, the total number of IS queries in all iterations except the last one is bounded by O(Σ_{i=1}^I t_i ε^{−2} log^2 n) = O(t_I ε^{−2} log^2 n). The stopping iteration uses O(t_I ε^{−2} log^2 n) IS queries. Each bound holds with high probability, and a union bound implies the same for the final result. Corollary 5.9. For a graph G = ([n], E), with access to G via IS queries, and a parameter ε > 0, one can (1 ± ε)-approximate m using O(ε^{−4} log^5 n + n^{2/3} ε^{−2} log^2 n) IS queries.
Proof. Follows readily as min(√m, n^2/m) ≤ n^{2/3}, for any value of m between 0 and n^2.
Limitations of IS Queries
In this section, we discuss several ways in which IS queries seem more restricted than BIS queries.
Simulating degree queries with IS queries. A degree query can be simulated by O (log n) BIS queries (see Lemma 4.10). In contrast, here we provide a graph instance where Ω(n/deg(v)) IS queries are needed to simulate a degree query. In particular, we show that IS queries may be no better than edge existence queries for the task of degree estimation. Since it is easy to see that Ω(n/deg(v)) edge existence queries are needed to estimate deg(v), this lower bound also applies to IS queries.
For the lower bound instance, consider a graph that is a clique along with a separate vertex v whose neighbors are a subset of the clique. We claim that IS queries involving v are essentially equivalent to edge existence queries. Any edge existence query can be simulated by an IS query. However, any IS query on the union of v and at least two clique vertices will always detect a clique edge. Thus, the only informative IS queries involve exactly two vertices.
Coarse estimator with IS queries. It is natural to wonder if it is possible to replace the coarse estimator (Lemma 4.8) with an analogous algorithm that makes polylog(n) IS queries. This would immediately imply an algorithm making polylog(n)/ε^4 IS queries that estimates the number of edges. We do not know if this is possible, but one barrier is a graph consisting of a clique U on O(√m) vertices along with a set V of n − O(√m) isolated vertices. We claim that for this graph, the algorithm CoarseEstimator(U, V) from Section 4.2, using IS queries instead of BIS queries, will output an estimate that differs from m by a factor of Θ(n^{1/3}). Consider the execution of CheckEstimate(U, V, e) from Algorithm 4.1. A natural way to simulate this with IS queries would be to use an IS query on U′ ∪ V′ instead of a BIS query on (U′, V′). Assume for the sake of argument that m = n^{4/3} and |U| = √m = n^{2/3}. Consider when the estimate e satisfies e = cn^{5/3} for a small constant c. In the CheckEstimate execution, there will be a value i = Θ(log n) such that with constant probability, U′ ⊆ U will contain at least two vertices and V′ ⊆ V will contain at least one vertex. In this case, m(U′ ∪ V′) ≠ 0 even though m(U′, V′) = 0. Thus, using IS queries will lead to incorrectly accepting on such a sample, and this would lead to the CoarseEstimator outputting the estimate e = Θ(n^{5/3}) even though the true number of edges is m = n^{4/3}.
CONCLUSION
In this article, we explored the task of using either BIS or IS queries to estimate the number of edges in a graph. We presented randomized algorithms giving a (1 + ε)-approximation using polylog(n)/ε^4 BIS queries and min{n^2/(ε^2 m), √m/ε} polylog(n) IS queries. Our algorithms estimate the number of edges by first sparsifying the original graph and then exactly counting edges spanning certain bipartite subgraphs. Next we describe a few open directions for future research.
Open Directions
Open questions include using a polylogarithmic number of BIS queries to estimate the number of cliques in a graph (see the work of Eden et al. [20] for an algorithm using degree, neighbor, and edge existence queries) or to sample a uniformly random edge (see the work of Eden and Rosenbaum [21] for an algorithm using degree, neighbor, and edge existence queries). In general, any graph estimation problems may benefit from BIS or IS queries, possibly in combination with standard queries (e.g., neighbor queries). Finally, it would be interesting to know what other oracles, besides subset queries, enable estimating graph parameters with a polylogarithmic number of queries.
|
v3-fos-license
|
2016-05-31T19:58:12.500Z
|
2013-01-16T00:00:00.000
|
18520334
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.spandidos-publications.com/etm/5/3/777/download",
"pdf_hash": "e2262f3f8cce2e9a970d9ca391c4f2ee5de18abe",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46434",
"s2fieldsofstudy": [
"Engineering",
"Medicine"
],
"sha1": "e2262f3f8cce2e9a970d9ca391c4f2ee5de18abe",
"year": 2013
}
|
pes2o/s2orc
|
Octreotide and celecoxib synergistically encapsulate VX2 hepatic allografts following transcatheter arterial embolisation
To evaluate the encapsulation of VX2 hepatic allografts in rabbits induced by octreotide and celecoxib administration following transcatheter arterial embolisation (TAE), rabbits with hepatic VX2 allografts were divided into four groups: control, TAE, octreotide + celecoxib (O+C) and the multimodality therapy (TAE+O+C). Allograft metastasis, capsule thickness and percentage of clear cells were measured and vascular endothelial growth factor (VEGF) and CD31 were detected by immunohistochemistry and reverse transcription-polymerase chain reaction (RT-PCR) analysis. The extrahepatic metastases of each intervention group were significantly fewer than those of the control group, with the TAE+O+C group exhibiting the fewest extrahepatic metastases. The TAE+O+C group had the greatest proportion of clear cells and thickest capsule on day 30. Increased capsule thickness was negatively correlated with tumour metastasis. In addition, VEGF expression levels assessed by immunohistochemistry and RT-PCR in the three intervention groups were significantly lower than those in the control group. Furthermore, the TAE+O+C group had a significantly reduced CD31 count induced by TAE. These results demonstrate that TAE, followed by long-term administration of octreotide and celecoxib, synergistically inhibits VX2 hepatic allograft metastasis by increasing the proportion of clear cells, promoting encapsulation and inhibiting angiogenesis.
Introduction
The efficacy of therapies for hepatocellular carcinoma (HCC) is poor. Curative therapies, including resection, liver transplantation or percutaneous treatments, benefit only 30% of patients (1). Even so, the majority of surgically treated patients show recurrence within 5 years of resection, and this is linked to the high mortality of patients with resected HCC (2). Patients with large and multiple lesions exceeding the Milan criteria have been widely treated by transcatheter arterial embolisation (TAE), as it is a precisely targeted, minimally invasive, repeatable and well-tolerated method. Although occlusion of tumour-feeding arteries may lead to extensive necrosis in vascularised HCC, hypoxia and ischemia of tumour tissue may produce large quantities of factors capable of inducing significant angiogenesis in the residual viable tumour, promoting recurrence and metastasis and consequently counteracting the efficacy of TAE (3,4).
Peri-procedural use of anti-angiogenic agents is recommended in order to overcome the disadvantages of TAE. However, the efficacy of those agents remains uncertain (5)(6)(7). The upregulation of cyclooxygenase-2 (COX-2), a key enzyme in arachidonic acid metabolism, is believed to be involved in hepatocarcinogenesis (8,9) and induce HCC angiogenesis via vascular endothelial growth factor (VEGF) (10,11), making COX-2 a rational therapeutic target for selective COX-2 inhibitors, including celecoxib. Somatostatin (SST) is one of the regulatory peptides for arresting the growth of HCC and the overexpression of SST receptors has also been identified in HCC (12). Our previous studies demonstrated that a combination of a COX-2 inhibitor with an SST analogue not only had an enhanced anti-proliferative effect and suppressed the metastasis of HCC in nude mice (13) but also prolonged the survival of rabbits with liver cancer that received TAE (14).
Various histopathological factors, including tumour size, tumour number, vascular invasion and tumour encapsulation, have been reported to be related to the prognosis of HCC. One study indicated that encapsulation is a favourable factor in large HCCs (>5 cm) and that encapsulation may act as a barrier to prevent the spread of tumour cells (15). However, few antitumour regimes stimulate the encapsulation of HCC. The current study aimed to evaluate the encapsulation of VX2 hepatic allografts in rabbits induced by octreotide and celecoxib administration following TAE.
Materials and methods
Animal experiments. All animal experiments were approved by the Institutional Animal Care and Use Committee of Sichuan University and were conducted according to local laws set by Sichuan University. Adult New Zealand White male rabbits weighing 2.3-2.5 kg were purchased from the Experimental Animal Centre of West China Medical Centre, Sichuan University. VX2 allograft-bearing rabbits were purchased from the Union Hospital, Huazhong University of Science and Technology (Wuhan, China).
The establishment of VX2 hepatic allografts in rabbits, TAE procedure and experimental grouping were the same as in our previous study (14). Briefly, 72 rabbits were randomly assigned into four groups. The VX2 tumours were orthotopically implanted into the livers of the rabbits. A total of 67 VX2 allograft-recipient rabbits were divided into 4 groups and treated as follows: i) control (n=18), the sham-operated animals received normal saline (NS) daily, intragastrically and subcutaneously; ii) TAE (n=17), the animals received the TAE procedure and then NS in the same way as the control group; iii) octreotide + celecoxib (O+C; n=16), the animals received sham surgery and then subcutaneous administration of octreotide (Novartis Diagnostics, Emeryville, CA, USA) at a dose of 37 µg/kg/day plus intragastric administration of celecoxib (Pfizer, New York, NY, USA) at a dose of 22.2 mg/kg/day and iv) multimodality therapy (TAE+O+C; n=16), the animals received TAE followed by subcutaneous administration of octreotide at a dose of 37 µg/kg/day plus intragastric administration of celecoxib at a dose of 22.2 mg/kg/day. Eight animals of each group were sacrificed after 30 days of treatment. The other animals were raised until spontaneous mortality or were sacrificed after 80 days of treatment.
Once the tumours were removed and weighed, the organs were carefully searched for metastatic foci. The tumour inhibition rate (%) = [(tumour weight of control − tumour weight of treatment group)/tumour weight of control] x 100. Tumour tissue was fixed in neutral buffered formalin for histological examination or in 4% glutaraldehyde for transmission electron microscopy, or stored at -80˚C in an ultra-low temperature freezer for reverse transcription-polymerase chain reaction (RT-PCR) analysis.
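As an illustration of the inhibition-rate formula, the following short Python snippet computes it for hypothetical tumour weights; the numbers are for illustration only and are not taken from the study.

def tumour_inhibition_rate(control_weight_g, treated_weight_g):
    """Tumour inhibition rate (%) as defined in the text."""
    return (control_weight_g - treated_weight_g) / control_weight_g * 100

# Hypothetical weights for illustration only: control 12.0 g, treated 4.5 g.
print(f"inhibition rate = {tumour_inhibition_rate(12.0, 4.5):.1f}%")  # 62.5%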
Morphological evaluation of VX2 allografts in rabbits.
Paraffin-embedded specimens were sliced into 5-µm sections and stained with haematoxylin and eosin (H&E) for histological evaluation in a single-blinded fashion. Clear cells in the VX2 allografts, due to cytoplasmic accumulation of glycogen and fat droplets that dissolved during the H&E staining process and left behind a 'clear' cytoplasm, were detected and counted in each tumour allograft (16). Encapsulation of the tumours was evaluated and the capsule thickness of complete capsules was measured in pixel pitches using Image-Pro Plus 6.0 software (Media Cybernetics, Rockville, MD, USA) and then was normalised into µm. Each value was the mean of five visual fields in which duplicate measurements were made.
Tumour specimens from each group were also immersed in 4% glutaraldehyde (pH 7.4) at 4˚C for 24 h, postfixed in 1% osmium tetroxide for 1 h and embedded in Epon 812 following dehydration. Following double staining with uranyl acetate and lead citrate, ultrathin sections (60 nm) were examined with a transmission electron microscope (H-600IV, Hitachi, Tokyo, Japan).
Immunohistochemistry for VEGF and CD31. Immunohistochemistry was performed on 5-µm paraffin-embedded tissue sections on poly-L-lysine coated glass slides. The sections were deparaffinised and treated with microwaves for 15 min. For non-specific blocking, 10% goat serum was added and incubated for 20 min at room temperature. Then the VEGF antibody (ab288775; Abcam, Cambridge, MA, USA) and the CD31 antibody (08-1425; Zymed Laboratories Inc., San Francisco, CA, USA) at a 1:250 dilution were added to the individual sections. Positive reactions were revealed by the streptavidin-biotin-peroxidase technique. Sections were incubated with 3,3'-diaminobenzidine (0.05% 3,3'-diaminobenzidine in 0.05 M Tris buffer, pH 7.6 and 0.01% hydrogen peroxide) and counterstained with Mayer's haematoxylin. Image-Pro Plus 6.0 software was used to score the integrated optical density (IOD) from the VEGF expression in the tumour cells and count the number of CD31 per visual field (magnification, x200) in a single-blinded fashion. Each value was the mean of five visual fields in which duplicate measurements were made.
RT-PCR for VEGF analysis. Total RNA was extracted from allograft tissue using the TRIzol reagent (15596-026; Invitrogen Life Technologies, Carlsbad, CA, USA). Quantification and purity of extracted RNA were determined by the ratio of absorbance at 260 and 280 nm (A260/A280) with a spectrophotometer (GeneQuant 1300; Biochrom, Holliston, MA, USA). Reverse transcription and PCR amplification were conducted using a thermal cycler (PTC-100; Bio-Rad, Hercules, CA, USA), in accordance with the instructions of the RT-PCR core kit (K1622; Fermentas, Hanover, MD, USA). The primer sequences for the sense and antisense chains were as follows: glyceraldehyde 3-phosphate dehydrogenase (GAPDH; XM_002714697): 5'-TCT CGT CCT CCT CTG GTG CTC T-3' and 5'-AAG TGG GGT GAT GCT GGT GC-3'; and VEGF (NM_001082253): 5'-ATG GCA GAA GAA GGA GAC-3' and 5'-ATT TGT TGT GCT GTA GGA AG-3'. The PCR cycle profile was 94˚C for 30 sec, 52˚C for 60 sec and 72˚C for 60 sec, for 30 cycles. The amplification was terminated by a final extension step at 72˚C for 2 min. A positive control (kidney RNA) and an internal control (GAPDH) were amplified at the same time. PCR products were quantified using a gel membrane, which was scanned into an imaging system (Gel Doc 2000, Bio-Rad). The data were normalised as a ratio of gray scale (IOD) of objective band over GAPDH.
Statistical analysis. Quantitative data are expressed as the mean ± standard deviation and tested by one-way analysis of variance (ANOVA). Qualitative data were tested by the Chi-square test and correlation analysis was also conducted to verify the correlation between two parameters. Statistical analysis was performed using SPSS 13.0 for Windows (IBM, New York, NY, USA). P<0.05 was considered to indicate a statistically significant difference.
Results
Effect of O+C treatment on extrahepatic metastases following TAE. The total intrahepatic foci weight of each intervention group was significantly lower than that of the control group on day 30 and during days 30-80 (Table I and Fig. 1, row 1). The inhibition rate of the TAE+O+C group was the greatest among the three intervention groups during the whole experiment (Table I). Extrahepatic metastasis was detected in the control group on day 30; however, it was greatly reduced in the three intervention groups (P=0.006; Table I). During days 30-80, extrahepatic metastasis in the three intervention groups remained significantly lower than that of the control group (P=0.007). Over this time period, the TAE+O+C group demonstrated the least extrahepatic metastasis among the three intervention groups.
Effect of O+C treatment on clear cell number and capsule thickness following TAE.
Microscopically, the nuclei of clear cells were mainly located centrally or slightly eccentrically with dense or occasionally clumpy chromatin. They were detectable in the VX2 hepatic allografts of the four groups (Fig. 1, row 2). The TAE+O+C group demonstrated the highest proportion of clear cells during the whole experiment (P<0.05; Table I).
Complete capsule morphology was displayed in only half of the allografts in the control group on day 30; however, they were formed in the allografts in the majority of the TAE, O+C and TAE+O+C group animals (Table I). Forty percent of the allografts in the control group exhibited partial formation of capsules at spontaneous mortality. There were complete capsules around all the allografts in the three intervention groups on day 80. The three interventions significantly increased the thickness of the complete capsules during the whole experiment (P=0; Table I and Fig. 1, row 3). The thickest capsules, ~5.4 times the size of that in the control group, were observed in VX2 allografts treated with the TAE+O+C regime. The capsules were often adjacent to the necrotic tissues.
An irregular shape of the nuclear membrane and the nucleolus and extension of the endoplasmic reticulum were observed in the control group. Swelling mitochondria were displayed in the tumour cells of the TAE group. A greater number of apoptotic bodies in the cells of VX2 allografts were detected in the O+C group. Additionally, collagen bundles surrounding the tumour cells were clearly increased in the TAE+O+C group (Fig. 1, row 4).
Compared with partial capsules, complete capsules greatly reduced the intrahepatic lesions and intra-abdominal metastasis, as well as lung metastasis (P<0.05; Table II). The thickness of the complete capsules was negatively correlated with the intrahepatic lesions and intra-abdominal and lung metastasis (P<0.05; Table II and Fig. 2).
Effect of O+C treatment on VEGF expression following TAE-induced angiogenesis. The capsules of the VX2 hepatic allografts were rich in microvessels (Fig. 1, row 2), which was revealed by the positive staining of CD31 (Table I and Fig. 1, row 5). Following the TAE procedure, angiogenesis in the capsules significantly increased (Table I and Fig. 1, row 5). However, the TAE+O+C regime significantly downregulated the expression of CD31 and VEGF in the VX2 allografts (Table I and Fig. 1, rows 5 and 6). Although the expression of VEGF mRNA in the three intervention groups was significantly lower than that in the control group, there was no significant difference in the expression of VEGF mRNA between the TAE and TAE+O+C groups (Fig. 3).
Figure 1. VX2 allografts of the four groups on day 30. Row 1, gross morphology of allografts; row 2, clear cells (haematoxylin and eosin staining; magnification, x400 and x100); row 3, capsules (haematoxylin and eosin staining; magnification, x100); row 4, ultra-structure of the tumour cells. The arrow indicates a collagen bundle (transmission electron microscope; magnification, x10,000); rows 5 and 6, positive expression of CD31 and vascular endothelial growth factor (VEGF) with brown grains (immunohistochemical staining; magnification, x100). TAE, transcatheter arterial embolisation; O+C, octreotide + celecoxib.
Discussion
Clear cell HCC is a particular histological type, accounting for between 0.4 and 37% of all HCC cases (16-19). Patients with primary clear cell carcinoma of the liver often have a higher rate of HCV infection and capsule formation associated with suppressed vascular invasion. The prognosis is better than that of patients with common-type HCC and is related to the ratio of clear cells. There are few studies that demonstrate the effect of enhancing the proportion of clear cells in HCC. TAE followed by long-term administration of octreotide and celecoxib synergistically induces secondary clear cells in HCC and therefore greatly prolongs the survival of rabbits with VX2 hepatic allografts (14). Encapsulation, defined as the formation of a clear fibrous layer with collagen content, acts as a barrier to prevent the spread of tumour cells. Capsule formation has been observed in ~90% of primary clear cell carcinomas of the liver (20,21). The TAE procedure significantly enhanced the completeness of the capsules when compared with controls. Furthermore, TAE+O+C combination therapy significantly enhanced capsule thickness, resulting in an increased number of clear cells in the VX2 hepatic allografts. Hepatic stellate cells (HSCs) are the major cellular component of the HCC capsule, and the formation of an HCC capsule may start from the activation of HSCs (15). Encapsulation was usually related to ischemic necrosis of the surrounding tissue, which may increase the production of extracellular matrix in VX2 hepatic allografts. The collagen bundles adjacent to the tumour cells may also induce apoptosis of the tumour cells.
One study considered that an early HCC tumour is an ill-defined nodule without fibrous capsule formation, that the fibrous capsule appears as the tumour size increases, and that the survival of patients with encapsulated HCCs is poorer than that of patients with HCC without encapsulation (22). In contrast to this observation, the current study revealed that encapsulation of VX2 hepatic allografts was negatively related to tumour growth and metastasis. Moreover, there are a number of controversies regarding vascular invasion and capsule formation (15,23). The present study identified that the TAE+O+C regimen significantly inhibits TAE-induced angiogenesis and VEGF expression in the capsules, as well as increases the completeness and the thickness of the capsules. The main fibrogenic stimuli for stromal cells require further elucidation.
Although encapsulation of HCC is considered an important biological behaviour that would be beneficial to the host, no regime has previously been reported to encapsulate HCC as demonstrated in this study. The potential inhibition of VX2 hepatic allograft growth and metastasis with the TAE+O+C
|
v3-fos-license
|
2020-12-27T11:05:27.068Z
|
2020-11-24T00:00:00.000
|
229476085
|
{
"extfieldsofstudy": [],
"oa_license": "CCBYSA",
"oa_status": "GOLD",
"oa_url": "https://journal.uniku.ac.id/index.php/ijete/article/download/3683/2255",
"pdf_hash": "aa03308570893a9cdd62b1a0fdbcf31a2b083202",
"pdf_src": "MergedPDFExtraction",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46435",
"s2fieldsofstudy": [
"Mathematics",
"Education"
],
"sha1": "6c2223659eea28d3e4dfb371c51451c45827f07e",
"year": 2020
}
|
pes2o/s2orc
|
THE EFFECTIVENESS OF COOPERATIVE LEARNING IN THE COURSE OF MATHEMATIC PROBLEM SOLVING
Problems in mathematics are challenges that need solving but cannot be solved by routine procedures. Understanding the problem, as well as a wealth of experience and strategies, is needed to solve these problems. Students need experiences that can be obtained through interaction with others in cooperative learning. Different strategies that may emerge from others can enrich their experiences. The study was conducted to determine the effectiveness of cooperative learning in mathematics problem solving courses. The research method used is the descriptive qualitative method. The object of research is the effectiveness of group learning in mathematics problem solving courses, which includes four procedures: understanding the problem, planning a strategy, solving the problem, and checking the answer. The research subjects are fifth semester students of the PGSD STKIP PGRI Pacitan study program in the academic year 2019/2020. Sources of data in this study are: (1) mathematical problem solving test data, (2) observation data during the lecture process, and (3) group learning questionnaire data. The validity of the data was tested through triangulation. The results showed that group learning in mathematics problem solving courses fulfilled the characteristics of cooperative learning and provided benefits to students. The effectiveness of the four mathematical problem-solving procedures was 80% for understanding the problem, 80% for planning a strategy, 70% for solving the problem, and 75% for checking the answer.
INTRODUCTION
Cooperative learning in accordance with this procedure will make each individual feel the benefits of group learning. Nurhadi (2004: 112) states that the cooperative learning model is a learning approach that focuses on using small groups of students who work together to maximize learning conditions in order to achieve learning goals. Most groups involve four students with varying abilities. The characteristics of cooperative learning according to Isjoni (2009: 62) are the roles of each member, direct interaction between students, each member's responsibility for their own learning and for their group mates, the teacher's help in developing personal group skills, and the teacher's interaction with the group when needed.
The mathematical problem-solving course is part of the curriculum structure of the Primary School Teacher Education Study Program at STKIP (Teacher Education and Pedagogy College) PGRI Pacitan. This course appears in the odd semester, namely the fifth semester of the current academic year. Considering the earlier description that solving mathematical problems requires understanding, computational skills, appropriate procedures and strategies, and high creativity, and that cooperative learning has been shown to be superior to individual learning by enriching the problem-solving strategies and skills of the individuals concerned, the implementation of mathematics problem solving lectures at STKIP PGRI Pacitan is carried out by following the procedures and characteristics of cooperative learning.
The implementation of cooperative learning in the problem solving course is carried out through an active discussion stage in which students exchange information, and through cooperation between group members to complete the assigned tasks by helping one another, so that mutually beneficial activities and mutual understanding develop among group members. The effectiveness of cooperative learning in this mathematics problem solving course can be assessed after it has run for one semester.
Cooperative learning is a social-based learning and according to Anita Lie, this learning model is based on homo homini socius which is contrary to Darwin's theory. This philosophy emphasizes that humans are social creatures. Interactive dialogue or social interaction in learning is very necessary. Thus, cooperation is a very important requirement for survival, including in the learning process. Group learning can be categorized as cooperative learning if it is able to promote: (a) positive interdependence; (b) personal responsibility; (c) promotional interaction (face to face interactive promotion); (d) communication between members (interpersonal skills); and (e) group processing (Supriyono, 2011: 56-58). From some of the definitions and characteristics above, it can be concluded that cooperative learning is learning that involves students in small groups helping each other in learning, there is direct interaction, communication between group members, group processing, and individual responsibility.
Robert (Jihad and Haris, 2008: 14) states that learning with conditions like the one above is effective learning because the students can acquire specific skills, knowledge and attitudes. In other words, learning is effective when there are changes in cognitive, affective, and psychomotor aspects.
Among the benefits of implementing cooperative learning, according to the results of research by Morgan et al. (2005), is that students become active in solving mathematical problems. Even lazy students who previously did not work begin to participate in the problem-solving process. Students are also more motivated to work together in groups than to compete individually. They prioritize curiosity in the process of finding the correct answer rather than simply getting the right answer, and the teacher values each student's abilities more by involving each student in group discussions.
Problem-solving ability is one aspect of the ability that is important in learning mathematics. The usefulness and power of mathematics will be very limited if there is no problem solving ability. This is stated by NCTM (2000: 182) "Problem solving is the cornerstone of school mathematics. Without the ability to solve problems, the usefulness and power of mathematical ideas, knowledge, and skills are severely limited." Problem solving also serves as a means of learning about other mathematical ideas and skills. "Problem solving is also important because it can serve as a vehicle for learning new mathematical ideas and skills" (Schroeder & Lester in NCTM 2000: 182). This statement means that through problem solving students will be facilitated to explore more information, ideas, and mathematical skills.
In line with this, the results of research by Hino (2007: 1) in Japan showed that problem solving has the advantage of being able to stimulate efforts to develop material. Pimta, Tayruakham, & Nuangchalem (2009: 381) also stated that problem solving affects the development of thinking skills: "problem solving is considered as the heart of mathematical learning because the skill is not only for learning the subject but it emphasizes on developing thinking skill method as well." Each person has problems that differ from those of others; it depends on who is facing the problem. If he or she can solve a problem immediately, then that problem cannot be called a problem. A problem is a problem if the answer to it cannot be immediately found with certain rules or laws (Endang Setyo Winarni & Sri Harmini, 2011: 115). Furthermore, Musser, Burger, & Peterson explained that doing exercises is a very valuable aid in learning mathematics: "Doing exercises is very valuable aid in learning mathematics. Exercises help you to learn concepts, properties, procedures, and so on which you can then apply when solving problems" (Musser, Burger, & Peterson, 2011: 4).
Activity in solving problems means taking an action. This is as expressed by Polya (1962: 117): "to solve a problem means to find such action". Kruse (2009: 2) states that a problem requires someone to act and respond: "problems require us to act. They require us to respond, to figure out the steps and actions we will take." If the problem is structured well, someone will be able to clearly determine how, or with what action, to solve it: "If your problem is well-structured, you would have a clear idea of how to solve it" (Van Goundy, 2015: 22).
In essence, problem solving involves three important and fundamental skills, namely the skill to translate questions, the skill to choose strategies, and the skill to operate on numbers (Runtukahu & Kandou, 2014: 194-201). These three skills strongly support the problem-solving process. Polya divides the problem-solving steps into four procedures: understand the problem, plan a strategy, implement the strategy, and review the result (Polya, 1973: 5-6).
The first step is "to understand the problems," which is to formulate what to look for and what data is available. Billstein, (2014: 3) adds several things that can help understand the problem, including simplifying the questions, identifying what you want to find, and organizing the information obtained from the questions.
The second step, "plan a solution" is choosing a strategy. Burton et al., 1994: xx, "choose one of these strategies: guess and check, draw a picture, make a model, act out the problem, make a table, chart, or graph, write a number sentence". The strategy chosen is adjusted to the facts of the problem, how the problem can be solved with one or more existing strategies.
The third step, "solve the problem", is a step taken by solving the problem after determining the technique to be used. Computations carried out in this step can be by manipulating objects or using several other computational options, including using scribbles or written computation (paper and pencil), mental math (mental computation), and calculator (Burton et al., 1994: xxi).
The fourth step is "to look back and check for solutions or answers." This step is done by answering the questions "how can I check the answer" and "whether my answer has answered what was asked". A person can be wrong in solving especially if the arguments or statements in the problem are long. This is as expressed by Polya (1973: 15) in his book "Thus, he should have good reasons to believe that the solutions are correct. Nevertheless, errors are always possible, especially if the argument is long and involved. Hence, verification is desirable." Problem solving is a practical skill. Practical skills are acquired through practice and imitation or imitation. This is stated by Polya (1973: 5), "solving problems is a practical skill .... We acquire practical any skill by imitation and practice". Furthermore, Polya stated that someone who is trying to solve a problem needs to observe and imitate what other people do when solving problems, therefore group learning settings can support a person to more easily imitate what other people in the group are doing in solving math problems, in addition to very possible for discussion and exchange of ideas.
Indicators for measuring mathematical problem solving abilities as expressed by Souviney (1994: 16) by adapting from the Curriculum and Evaluation Standards for School Mathematics are: 1) the ability to formulate problems (formulate problems); 2) implementing various strategies to solve problems (apply a variety of strategies to solve problems); 3) solve problems / find solutions (solve problems); 4) verify / check and interpret the results according to the initial problem (verify and interpret results).
RESEARCH METHOD
Research Design
This research is a qualitative descriptive study. The data analysis is presented in the form of qualitative descriptions that explain the implementation of cooperative learning and its effectiveness in solving mathematical problems. The effectiveness of cooperative learning in solving mathematical problems is viewed from the perspective of the four mathematical problem solving procedures: understanding the problem, planning the solution, implementing the solution, and checking the results.
Participant
This research was conducted at STKIP PGRI Pacitan, the research subjects were fifth semester students of the PGSD Study Program of STKIP PGRI Pacitan in the academic year 2019/2020.
Data Collection
Data collection techniques using documentation techniques, observation, questionnaires, and tests. The documentation technique is carried out in the initial study to the final stage of the research. Documentation is related to data about the implementation of cooperative learning in mathematics problem solving courses and documentation of mathematics problem solving test results. The documents collected are photos of learning activities, as well as test answer sheets (middle semester test) for solving math problems in the odd semester of the 2019/2020 academic year.
Observations were carried out in a participatory manner where the researcher was involved in learning activities. Thus, the observer (researcher) fully understands the real
conditions of the group learning activities in the mathematics problem solving course. The questionnaire was used to explore data about the implementation of cooperative learning and to reinforce the data for determining the effectiveness of cooperative learning in the mathematics problem solving course, while the test was conducted to obtain data on the effectiveness of cooperative learning in the procedures of understanding the problem, planning the solution, implementing the solution, and checking the results.
Data Analysis
The validity of the data was tested through technique triangulation, that is, cross-checking the research data gathered with different data collection techniques: documentation, observation, questionnaires, and tests. The data analysis technique used is qualitative: descriptive accounts based on the collected data, built through the stages of reducing the collected data, describing the data by category, recording the researchers' interpretations based on the observations made, and linking the discussion and the results to the theory that has been cited.
Implementation of Group Learning
At the initial meeting (phase 1), the class was divided into 5 study groups, with 6-7 members in each group. Group members were determined randomly by counting off. Each group consisted of men and women with various abilities. The agreed rule for every mathematics problem solving lecture was that each week each group reads and discusses the same material together and then continues by solving the accompanying problems together. Thus, no student misses the material, because all class members study the material simultaneously at every meeting. Problem solving involves the group members discussing, teaching each other, and sharing information. The role of the lecturer in phase 1 is to help during the discussion process: walking around, monitoring, helping when there are difficulties, and providing prompts if students have difficulty understanding the problems or developing problem-solving strategies.
The next meeting (phase 2) begins with determining the discussant group by lottery. The group selected as discussants is in charge of discussing the problems in the chapter being studied at that time. The discussion begins with members of the non-discussant groups working on the problems, so that each individual and each group carries the same responsibility for mastering the material and solving the existing problems. The role of the lecturer here is to provide corrections (to conclude whether or not the discussion given by the discussant group is correct) or even a completely different answer if the discussant group has no answer or gives a wrong one. The implementation of group learning in the mathematics problem solving course follows the procedure shown in the figure below: [Figure: Phase 1 (odd meetings) — (1) reading and discussing the learning material; (2) doing the exercises together in teams; (3) the lecturer acting only as facilitator. Phase 2 (even meetings), part 1 — (1) choosing the discussant team; (2) members of the non-discussant teams doing the exercises on the whiteboard; part 2 — (1) explanation of the answers by the chosen discussant team; (2) correction or additions from the lecturer.] When viewed from the learning process in the problem solving course, this learning has fulfilled the cooperative learning categories expressed by Roger and David (Supriyono 2011: 58): a) there is positive interdependence; b) there is individual responsibility (personal responsibility); c) there is promotional interaction (face to face interactive promotion); d) there is communication between members (interpersonal skills); e) there is group processing. In addition, the learning process also has the characteristics of cooperative learning as conveyed by Isjoni (2009: 62), namely that each member has a role, there is direct interaction between students, each member of the group is responsible for their own learning and for their group mates, the teacher helps develop personal group skills, and the teacher only interacts with the group when needed.
Student Response to Cooperative Learning
To ensure that cooperative learning ran properly, the researchers distributed a questionnaire to gather information about the conditions of each group during the group learning process. The questionnaire asked whether the group learning process had gone through the proper phases, namely whether the discussion started from reading the material, whether group members were actively involved in discussion and cooperation, and whether there were benefits from learning with a grouping system like this.
The student responses to the distributed questionnaire showed that some groups started the discussion by reading the material and writing down its important points, but there were also some who immediately tried to work on the questions. The researchers' observations confirmed this condition: some students prefer to work on the questions and skip the reading phase. In cases like this, the
lecturer takes advantage of the moment when a student asks about a problem he or she cannot solve by asking whether the student has read the material, because the problems being asked about are similar to the examples written in the material. Prompts like this make students aware that reading the material is important, because it gives them the initial knowledge needed to work on the problems.
The conditions for the group that did not skip the reading phase at the beginning looked different. Some students seemed to note the main points or important points of the material they read.
Most students have also been actively involved in the group discussion process. Most groups have experienced interaction between members, checking with each other and exchanging information. Discussions are livelier in problem solving because there is something more challenging in the problem. Even though at the beginning of the process, the questions were divided individually with the intention of reducing the burden on the group, at the end there was always a discussion among the group members and all students had notes on their completion.
Students also feel the advantages of this group study: in understanding the material, they can be taught by their friends; the questions feel lighter because they are worked on together; and it also helps in tests because they still remember their friends' explanations from the discussions of the material and of the questions. Even so, some students admitted that the explanations from friends during the discussion were sometimes not in depth.
This description shows that group learning provides several advantages that are not obtained from individual learning. This is consistent with the findings of Morgan et al. (2005) that students become active in solving mathematical problems. Lazy students who previously did not work begin to participate in the problem-solving process. Students are also more motivated to work together in groups than to compete individually. Students prioritize curiosity and the process of finding the correct answer rather than just getting the right answer right away, and teachers value each student's ability more by involving each student in group discussions.
Robert (Jihad and Haris, 2008: 14) states that learning with conditions like the one above is effective learning, where by learning students acquire specific skills, knowledge and attitudes. In other words, learning is effective when there are changes in cognitive, affective, and psychomotor aspects.
Mathematical Problem Solving Test Results
Understanding of the Problem
The first step in understanding the problem is defining what to look for and what data are available. Some of the things done are simplifying the questions, identifying what is to be found, and organizing the information obtained from the questions (Billstein, 2014: 3). Things that must be considered include what is unknown, what data are given, whether the condition can be stated in the form of an equation or other relationship, whether the given condition is sufficient to find what is being asked, whether the condition is excessive or contradictory, and drawing or writing the appropriate notation (Wahyudi, 2017: 18).
Understanding of mathematical problems can be seen from students' descriptions in writing what is known (the data provided) in the form of sentence descriptions, mathematical equations or relationships, pictures or other forms and descriptions of what is being asked (what is not known). All of the information needed for the work is written down, and unnecessary information (excessive or conflicting conditions) is left out. If the information or
conditions provided are not sufficient to find what is being asked, students can process the existing data to find the additional data needed.
The results of the problem-solving test showed that most of the students, about 85% of them, were able to correctly write down what was known and what was asked in the questions. Across the 4 math problem solving test questions, the average student could write down what was known and what was asked. In test question number 1, about finding the difference between two natural numbers whose sum is 28 and whose product is 192, all students could write what was known and what was asked correctly.
In question number 2 about finding the area of a flat shape contained in a wider flat shape, using several strategies, as many as 3% or 2 students wrote what was asked of the questions with a few errors, namely incomplete writing information. In addition, 20% or 14 students during the process failed to find the shape in question, even though when writing down what was asked was correct. For example, it has been written that what is being asked is the area of DOC, but students still write down the area of the ABCD rectangle, the area of BOC, or the area of ADO without using these areas to find the area of DOC.
In question number 3, about finding the length of the base and the height of a triangle from its area and the known relationship between base and height, 3% or 2 students wrote what was known with a slight error, namely incomplete information: instead of writing base length = 4/3 × height of the triangle, or a = (4/3)t, only the base length of 4/3 was written. In this problem, as many as 21% or 15 of the students were trapped into wrongly substituting the base length value when entering the solution stage.
Question number 4: "How many possibilities will someone get in changing Rp. 70,000.00 which was exchanged for Rp. 1,000.00; Rp. 5,000.00; Rp. 10,000.00; Rp. 20,000.00. Money denominations may be more than one ". In this question, many students were incomplete in writing down information, namely that the fractions of money that had to be exchanged were not written down. There are approximately 15% or about 10 students who make this mistake. Thus it can be concluded that the effectiveness of group learning in understanding mathematics problems is around 80%.
Solution Strategy Planning
The second step is to choose a strategy. The strategy chosen is adjusted to the facts of the problem, how the problem can be solved with one of the existing strategies. "Choose one of these strategies: guess and check, draw a picture, make a model, act out the problem, make a table, chart, or graph, write a number sentence" (Burton et al., 1994: xx).
In this step, students were able to choose strategies in accordance with the given questions. Each of the four questions has its own strategy that best fits the type of problem, and some problems can be solved with two different strategies. For example, question number 1 asks students to find two numbers whose sum is 28 and whose product is 192. This question can be solved with a guess-and-check strategy, or by listing all the possibilities that exist. Both of these methods were used by students.
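As an added illustration (not from the paper) of the "list all possibilities" strategy for question number 1, a one-line enumeration suffices:

```python
# List pairs of natural numbers with sum 28 and product 192 (question 1).
pairs = [(a, 28 - a) for a in range(1, 15) if a * (28 - a) == 192]
print(pairs)  # [(12, 16)] -> the two numbers are 12 and 16; their difference is 4
```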
Meanwhile, in solving problem number 3, which involves the area of a plane shape, students chose a strategy using a formula. Some students made errors in writing the formulas: some wrote them incompletely, and there were also fatal errors in writing them. Formula errors occurred most often in problem number 3, which is related to the area of a triangle, and in number 2, related to the area of a trapezoid, where the area of DOC can be found as the area of the trapezium DCBO minus the area of the triangle COB.
In addition to formula errors, strategy errors also occurred in question number 2, which asks for the area of DOC in a variety of different ways. Students made errors in determining the relationships between the shapes, and some did not know what strategy to use to determine the area, so they just wrote down the areas of all the shapes. Solving problem number 2 requires physical manipulation to provide a concrete picture of the parts of the existing shapes and the relationships between them.
If converted into percent, there are approximately 20% of students or around 14 students who are still unable to plan the completion strategy well. Thus it can be concluded that the effectiveness of group learning in planning math problem solving strategies is around 80%.
Problem Solving
Things that must be considered in the third step or the problem solving stage are checking each step whether it is correct or not and how to prove that the steps chosen are correct (Wahyudi, 2017: 19). Computations carried out in this step can be by manipulating objects or using several other computational options, either with scribbles (paper and pencil), mental computation (mental math), or using a calculator (Burton et al., 1994: xxi). Most of the students have been able to solve the problem well. However, there are still around 20 students or 30% of the total number of students who are still unable to carry out this third stage properly.
The most common error occurred in number 3, namely when determining the length of the base and the height of the triangle. There is a division by a fraction that should be carried out as multiplication by its inverse, but students did not do it that way. In question number 3, many students were also wrong in substituting the values for the length of the base and the area of the triangle. In this question students were also inaccurate in determining the results; for example, there was actually still one step to be done, but students considered the work completed and did not continue.
There are also many computational errors in question number 4, namely in the case of money changers. Many of the money denominations written by the students were wrong and by not checking in the fourth step, the students' answers remained wrong. In questions 1 and 2 there is also a slight error in calculating the results of multiplication, addition and subtraction. Thus it can be concluded that the effectiveness of group learning in problem solving is 70%.
Answer Checking
The fourth step is "look back and check the solution". This step is done by answering the questions "how can I check my answer" and "does my solution answer the question". This fourth step needs to be done so that someone has a strong reason or believes that the problem solving done is correct (Polya, 1973: 15). In this fourth step, the process and results are checked again, as well as their suitability with the question request.
The checking stage in solving math problems is still rarely done by students when doing tests. Of the total number of 79 students, there are around 25% or 20 students who missed or did not take this step. The checking stage is very necessary, especially in working on question number 4 about money exchange. Some students who missed the check wrote wrong answers because students did not recalculate the total number of fractions they wrote down.
A small number also made mistakes in question number 1: because they missed the checking stage, some students wrote wrong answers for the two numbers which add up to 28 and multiply to produce 192. Meanwhile, a small proportion of students (2 people) did check their work on question number 3 about the area of the triangle: these 2 students entered the values they obtained for the length of the base and the height of the triangle back into the area formula to make sure their answers were correct. Things like this should be done by all students so as to minimize errors in the solutions. Thus it can be concluded that the effectiveness of group learning in checking solutions is 75%. The results of the math problem solving test can be seen in Table 1.
CONCLUSION AND RECOMMENDATION
Based on the research results, the implementation of group learning in the mathematics problem solving course has met the cooperative learning categories, which are: (a) positive interdependence; (b) personal responsibility; (c) promotional interaction (face to face interactive promotion); (d) communication between members (interpersonal skills); (e) group processing. In addition, the learning process also has the characteristics of cooperative learning, namely that each member has a role, there is direct interaction between students, each member of the group is responsible for their own learning and for their group mates, the teacher helps develop personal group skills, and the teacher only interacts with the group when needed. The effectiveness of group learning is 85% for problem understanding, 80% for strategy planning, 70% for problem solving, and 75% for checking answers.
A recommendation for further research is that cooperative learning should be applied to other mathematics courses, for example upper-grade mathematics education courses. This is because cooperative learning is still rarely applied to mathematics learning.
|
v3-fos-license
|
2023-01-09T05:09:53.486Z
|
2022-12-30T00:00:00.000
|
255521509
|
{
"extfieldsofstudy": [
"Computer Science",
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.mdpi.com/1424-8220/23/1/420/pdf?version=1672404941",
"pdf_hash": "938822aa3e4cdc7f3adbb98fbd1e885281bd4663",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46437",
"s2fieldsofstudy": [
"Engineering",
"Physics",
"Environmental Science",
"Computer Science"
],
"sha1": "938822aa3e4cdc7f3adbb98fbd1e885281bd4663",
"year": 2022
}
|
pes2o/s2orc
|
Multi-Scale Histogram-Based Probabilistic Deep Neural Network for Super-Resolution 3D LiDAR Imaging
LiDAR (Light Detection and Ranging) imaging based on SPAD (Single-Photon Avalanche Diode) technology suffers from severe area penalty for large on-chip histogram peak detection circuits required by the high precision of measured depth values. In this work, a probabilistic estimation-based super-resolution neural network for SPAD imaging that firstly uses temporal multi-scale histograms as inputs is proposed. To reduce the area and cost of on-chip histogram computation, only part of the histogram hardware for calculating the reflected photons is implemented on a chip. On account of the distribution rule of returned photons, a probabilistic encoder as a part of the network is first proposed to solve the depth estimation problem of SPADs. By jointly using this neural network with a super-resolution network, 16× up-sampling depth estimation is realized using 32 × 32 multi-scale histogram outputs. Finally, the effectiveness of this neural network was verified in the laboratory with a 32 × 32 SPAD sensor system.
Introduction
LiDAR plays a vital role in the perception and localization of autonomous vehicles, but the depth information from raw sensors suffers from spatial and temporal noise and resolution in practical scenarios. Recently, dToF (direct Time of Flight)-based depth sensing emerged, but its pixel resolution and power consumption are hard to balance for the sensor designer. With super-resolution neural networks being more commonly used, depth maps can be up-sampled from low to high resolution with the guidance of RGB images. This enables dToF sensors to output high-quality depth maps within limited pixel number and power consumption. Some up-scaling works on depth maps that employ hardware and algorithms are introduced independently below.
In the field of sensor design, research on increasing the pixel density of SPAD sensors has been going on for decades, from the 128 × 128 sensor [1] to the MEGA pixel array based on the 3D-stacked process [2]. Due to the maturation of SPAD device technology, smaller pixels, higher fill factors, and lower dark count rates (DCRs) have been achieved [3]. Beyond device improvements, much research has focused on digital signal processing. In [4][5][6], the chip area is dominated by the memory resources for histogram data storage, the Time-to-Digital Converter (TDC), and the depth calculation units. In addition, the speed of the hardware is limited by the high-bandwidth IO requirements caused by the huge amount of raw histogram data output. In [7], Zhang et al. develop a partial histogram method to save digital logic and on-chip SRAM area in a ratio of 14.9 to 1. However, these digital signal processors only focus on hardware optimization; with limited flexibility, the trade-off between sensor performance and on-chip resources is difficult to achieve.
Recently, neural networks have been introduced to improve the quality of 3D imaging. In [8][9][10][11][12], a neural network is capable of recovering the depth information from a flood of reflected photons with significant spatial and temporal noise, such as multi-path reflection and ambient light. In addition, thanks to the spatial and temporal correlation within the neural network, the RGB and intensity information can be fully utilized as prior knowledge to further improve the 3D images. In [8], by employing a sparse point cloud and gray-scale data as the inputs to the 3D convolution neural network, the temporal noise is filtered out, and the spatial density is increased. In order to achieve a better end-to-end super-resolution imaging quality, the authors of [9] take the optical phase plate pattern and its spatial pattern distribution function as the neural network's prior knowledge, which combines optical design and a reconstruction neural network. The authors of [10] propose a two-step interpolation scheme for high-speed 3D sensing, and a high-quality imaging system with a frame rate of over 1 k-fps is demonstrated. A multi-feature neural network using the first depth map and the second depth map is designed to obtain the up-sampling depth in [11], and the feasibility of object segmentation on a small batch is proven in [12].
To fill the gap between efficient hardware usage and algorithm design, a new algorithm combining a novel data format and a two-step neural network is proposed. Firstly, a multi-scale histogram is adopted on hardware to save on-chip complexity and keep more information. Secondly, a probabilistic encoder and Super-Resolution (SR)-Net are proposed to recover and up-sample the depth map. The probabilistic encoder is responsible for extracting the depth information from histogram bins. Based on this kind of design, a probabilistic encoder-based neural network is first introduced for the SPAD denoising problem. Generated by the probabilistic encoder, the low-resolution depth image is up-scaled with SR-Net. To recover the high-resolution depth image, U-Net-based SR-Net conducts the 4× up-sampling of the output depth of the probabilistic encoder. Thirdly, by simulating the depth maps using an open depth dataset, the neural network is trained for specific hardware design under the same parameters set in the depth sensor, which guarantees performance in real tests. Finally, by testing the algorithm in the laboratory, we show that the proposed solution can recover the depth image from 32 × 32 multi-stage histogram bins, which demonstrates feasibility when implementing the network from training datasets to the real world. This work is organized as follows: The principle of the proposed neural network is described in Section 2. The simulation details of the generated dataset and training results are shown in Section 3. In Section 4, the sensor design, hardware architecture, and data acquisition are introduced. Section 5 compares our method with other advanced LiDAR systems. Furthermore, the discussion and conclusion are presented in Sections 5 and 6.
Principle of Proposed Probabilistic Deep Neural Network
TCSPC (Time-Correlated Single-Photon Counting) is the classical method used to estimate distance information. The on-chip memory footprint needed to construct the full histogram of returned photons is immense; therefore, multi-scale histograms represent an efficient way to extract peak values. However, there are several concerns about multi-scale histograms: (i) Regarding the temporal information carried by the returned photons, one surface per pixel is assumed, since the peak extraction can only find one peak, which may not be valid at object edges. (ii) With only a few counted photons, the simple max-selection criterion is very noisy and probably leads to sub-par results, since depth estimation works by selecting the time bin with the maximum photon count and then running another measurement acquisition that samples finer time bins within the initially selected bin at the coarser scale. Considering these drawbacks of partial histograms, we propose an algorithm that combines classical multi-scale histograms for temporal depth information extraction with a U-Net-based neural network for spatial up-scaling and denoising.
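As a hedged illustration of the coarse-to-fine binning described above (a sketch of the idea, not the authors' on-chip implementation), a stream of 12-bit TDC codes can be reduced to three 16-bin histograms that successively zoom into the peak bin of the previous stage:

```python
import numpy as np

def multiscale_histograms(tdc_codes):
    """Illustrative sketch: build three 16-bin histograms from 12-bit TDC codes
    by zooming into the peak bin of the previous stage, i.e. binning on
    TDC<11:8>, then TDC<7:4>, then TDC<3:0>.
    All 3 x 16 = 48 bins are kept as the network input."""
    tdc_codes = np.asarray(tdc_codes)
    histograms, lo, width = [], 0, 4096
    for _ in range(3):
        width //= 16                              # 256 -> 16 -> 1 codes per bin
        in_range = tdc_codes[(tdc_codes >= lo) & (tdc_codes < lo + 16 * width)]
        hist, _ = np.histogram(in_range, bins=16, range=(lo, lo + 16 * width))
        histograms.append(hist)
        lo = lo + int(np.argmax(hist)) * width    # zoom into the peak bin
    return np.concatenate(histograms)             # 48 bins in total
```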
As shown in Figure 1a, compared with the previous methods, a multi-scale histogram is adopted to save on-chip memory and the calculation load of peak finding. The full histogram in Figure 1b is used first to calculate the number of reflected photons in each bin. Full histograms cost a huge memory area and computation load for peak finding; therefore, partial histograms are designed to save on-chip memory and calculation load. However, as shown in the partial histogram calculation in Figure 1c, all information except for the peak value is discarded when zooming into the next stage, which leads to inaccurate depth estimation. In the proposed multi-scale histogram method (Figure 1d), all 48 bins are taken as the inputs to the neural network. By training the neural network on the simulated dataset, an accurate depth map is obtained in this way. The architecture of the proposed neural network is demonstrated in Figure 2. Three 16-bin histogram distributions are obtained after 4 k pulses for each pixel. Because the distribution of reflected photons follows a Poisson distribution P, the encoder part of a VAE [13] is introduced to learn the mean value (Equation (1)) of the independent distribution of every pixel. The reparametrization trick is adopted to obtain the estimated value of the Gaussian distribution with average m_{i,j} and deviation σ_{i,j}. According to the returned TDC results, the peak range is zoomed into at the next stage. Figure 2b shows the probabilistic encoder structure, which combines the CNN-based encoder part and the reparametrization trick; the U-Net-based SR-Net, which is responsible for the up-sampling task in this neural network, is shown in Figure 2c.
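A minimal sketch of the reparametrization step described above is shown below; the fully connected layers and their sizes are illustrative assumptions (the paper uses a CNN-based encoder whose exact layout is not given here):

```python
import torch
import torch.nn as nn

class ProbabilisticEncoderHead(nn.Module):
    """Sketch of a per-pixel probabilistic head with the reparametrization trick.
    Layer choices are assumptions, not the authors' network."""
    def __init__(self, in_bins=16, hidden=32):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(in_bins, hidden), nn.ReLU())
        self.mu_head = nn.Linear(hidden, 1)       # per-pixel mean m_ij
        self.logvar_head = nn.Linear(hidden, 1)   # per-pixel log-variance

    def forward(self, hist):                      # hist: (n_pixels, 16)
        h = self.backbone(hist)
        mu, logvar = self.mu_head(h), self.logvar_head(h)
        std = torch.exp(0.5 * logvar)
        eps = torch.randn_like(std)               # reparametrization trick
        return mu + eps * std, mu, logvar         # sampled depth estimate
```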
The three histograms are constructed using different granularities of the 12-bit TDC; therefore, the actual depth resolution ranges from TDC<11:8> to TDC<7:4> to TDC<3:0>. Obtained from the three multi-scale histogram probabilistic encoder branches, the initial multi-scale depth maps are multiplied by different ratios and added together as the input to the next up-sampling net. According to the experiments, the chosen ratio is vital for the convergence of the neural network. The resolution of the TDC is used as prior knowledge for the multi-scale histogram. For the up-sampling net, the U-Net [14] architecture is applied as SR-Net, which deals well with image segmentation and super-resolution issues. Furthermore, its use of multi-level concatenation contributes greatly to the fusion of multi-scale spatial information. Considering the hardware characteristics of SPAD imagers, this work aims to design an algorithm that avoids using auxiliary information such as RGB images, albedo maps, and intensity images. Although an RGB image contributes a lot to the final up-sampled image quality, it raises the problem of misalignment between RGB image pixels and SPAD array pixels. Compared with an RGB image, the density map (confidence map) is an alternative that consumes fewer registers and provides a confidence guidance map from a hardware point of view. Owing to the time required to collect real SPAD measurements, the training datasets in this experiment were simulated using the Middlebury dataset [15] with 18 scenes. In contrast to other simulated datasets used in super-resolution tasks, multi-scale histograms are generated for each training and test image. To mimic the multi-scale histogram behavior, firstly, SPAD measurement histograms with 16 bins are simulated with a bin size of 7680 ps. Secondly, the histogram range is zoomed into 480 ps bins, and the same number of histogram bins is maintained. Finally, each bin has a resolution of 30 ps in the last stage. To tolerate the different noise levels of real scenes, the Signal-to-Background Ratio (SBR) ranges from 0.05 to 5. The simulated data for training the neural network are generated according to Equation (2).
where n is the sample index from objects, η is the detection efficiency, and γ is the reflection factor that is related to distance and object materials. The number of arriving photons, τ, follows independent distribution for pixel (i, j). Ambient photons α result from background lights, such as a fluorescent lamp, sunlight, and other potential light sources. The hot noise of the SPAD also brings some erroneous counts (d). Additionally, the error between probabilistic encoder outputs and true distance is evaluated using L2 (denoted as Mean Squared Error; Equation (3)).
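The paper's exact forward model (its Equation (2)) is not reproduced in the extracted text, so the following sketch only illustrates the general idea of the simulation: a Poisson-distributed signal peak at the bin of the true time of flight plus uniform ambient counts controlled by the SBR. All parameter names and the per-bin ambient rate here are assumptions.

```python
import numpy as np

def simulate_pixel_histogram(true_depth_m, n_bins=16, bin_size_ps=7680,
                             sbr=5.0, mean_signal=14.0, rng=None):
    """Illustrative coarse-stage histogram for one pixel (not Equation (2))."""
    rng = np.random.default_rng() if rng is None else rng
    c = 3.0e8                                   # speed of light, m/s
    tof_ps = 2.0 * true_depth_m / c * 1e12      # round-trip time of flight
    peak_bin = int(tof_ps // bin_size_ps) % n_bins
    hist = rng.poisson(mean_signal / sbr, size=n_bins).astype(float)  # ambient
    hist[peak_bin] += rng.poisson(mean_signal)                        # signal
    return hist
```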
Under real lab conditions, the raw depth value is computed using MLE (maximum likelihood estimation). The depth value is calculated from the full histogram in the lab, and d_{i,j} is estimated using Equation (4) (cited from [10]), where b is the median of the bins, used as the measured ambient level. The entire neural network loss, L (Equation (5)), consists of the Root Mean Squared Error (RMSE) between the initial depth predicted by the probabilistic encoder and the 32 × 32 down-sampled ground-truth depth, plus the error between the up-sampled depth image and the 128 × 128 original ground-truth depth.
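Since Equation (4) itself is not reproduced in the extracted text, the snippet below is only an assumed centre-of-mass style estimate with the median-bin ambient subtraction mentioned above, not the exact expression from [10]:

```python
import numpy as np

def com_depth_estimate(hist, bin_size_ps=30.0):
    """Assumed full-histogram depth estimate with ambient subtraction
    (a sketch in the spirit of the referenced Equation (4))."""
    hist = np.asarray(hist, dtype=float)
    b = np.median(hist)                       # median bin as ambient level
    w = np.clip(hist - b, 0.0, None)          # background-subtracted counts
    t_bins = (np.arange(hist.size) + 0.5) * bin_size_ps * 1e-12
    tof = float(np.sum(w * t_bins) / max(np.sum(w), 1e-12))
    return 3.0e8 * tof / 2.0                  # depth in metres
```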
where b is the median of the bins used as the measured ambient level. The entire neural network loss, L (Equation (5)), consists of the Root Mean Squared Error (RMSE) among the probabilistic encoder-predicted initial depth with 32 × 32 downsampled depth value from the ground truth, the loss of the up-sampled depth image error and the 128 × 128 original depth value. Figure 3 demonstrates the simulated images of three multi-scale histograms using the test dataset with SBR = 5. Figure 3a shows one ground truth image from the test dataset. The processed peak value images for TDC<11:8>, TDC<7:4>, and TDC<3:0> are shown in columns (b), (c), and (d). The coarse range of TDC<11:8> showed the coarsest range of the detected objects with resolution of 7680 ps in this experiment, which means that the distance range in column (b) is above 1.115. In column (c), the data located in TDC<7:4> contain rich outline information of the objects; due to this, the greatest depth value in the simulated dataset is between 2 m and 3 m. Column (c) shows a distribution similar to the original object surface. In column (d), the least four significant bits of TDC<3:0> fill in more details, which show minor variations in the object surface. After 100 epochs of training, the neural network achieved a lower RMSE, with 0.022 m, on the test dataset.
Chip Description and Data Acquisition
The block diagram of the SPAD array and the TDCs is introduced in Figure 4. The sensor consists of a 128 × 128 array of SPADs, each 25 um in diameter, and 32 shared TDCs. Measurements were performed with a dark count rate (DCR) of 100 kcps at 70 °C, and the photon detection probability (PDP) was 1% at 940 nm. The low-resolution histogram was sampled from 4 × 4 combined pixels under a measured SNR condition of 14 signal photons and, on average, 450 noise counts over all pixels. Considering the effect of the coarse-grained TDC, the sampling numbers of the three stages were configured as 2 k, 1 k, and 1 k. Over the whole dynamic range of the TDC, the simulated probabilistic SNR for each detection was 14/450, and there were thousands of pulses for the histogram calculation. Figure 5 shows the hardware platform, including the SPAD array and the VCSEL [16]. The SPAD array was fabricated in TSMC 180 nm technology. The SPAD array (Figure 5a) has 128 × 128 SPADs, and each 4 × 4 group of SPADs is combined into one output to obtain the 32 × 32 depth map. The prototype of the imaging system is shown in Figure 5b; it was set parallel to the target statue 1 m away, shown in Figure 5c. The VCSEL emitted at a wavelength of 940 nm, and the laser pulse width was set to 1 ns. The algorithm was tested under a halogen lamp with a wide spectral range (350-2500 nm), and the background count was 240 kcps on average in the indoor environment. To increase the robustness of the algorithm, the first stage needs to sample more data than the two later stages. As demonstrated in Figure 1a, the VCSEL was driven 2 k, 1 k, and 1 k times in the multi-scale histogram mode for the respective stages. To make the neural network inference time acceptable, a deep learning accelerator (DLA) is designed in this work on the algorithm end. As shown in Figure 6, the DLA is composed of 512 MACs and 512 KB of on-chip ping-pong SRAM for storing the intermediate features and weights. The process element (PE) array is configured as 16 atomic channels and 32 atomic kernel operations. The accelerating performance estimation for the proposed neural network was conducted by modeling the DLA with a deep learning accelerator tool in PyTorch. The adopted DLA was implemented in Verilog and synthesized at 500 MHz with a DC compiler under a TSMC 40 nm process. The synthesized system performance is shown in Table 1, and the power consumption and frame rate comparison with other LiDAR systems are reported in Table 2. (Figure 6 shows the adopted DLA architecture diagram.)
Results and Discussion
The validation results of the proposed neural network on hardware are shown in Figure 7. In Figure 7a-c, the peak maps of the multi-scale histogram coincide with the simulated data, which supports the feasibility of the algorithm on hardware. Regarding the edges of the detected objects, the receiver is triggered by some photons reflected by the background. In Figure 7a, the distance values on the outline are affected by background points, which show various colors, i.e., flying noise. According to the output result in Figure 7d, the proposed neural network was effective in extracting the depth map of this experimental set and recovered the details of the statue well.
The performance of related research works is summarized in Figure 8. The SR-Net proposed in this work recovered more detailed depth features of the detected object. In [16], the same backbone as SR-Net was adopted, but the ML-based spatial resolution up-scaling model proposed there failed to improve the SNR, and spot noise was clearly visible. In [10], the result kept the intrinsic distortion characteristics of SPAD sensors, including the line offsets, which can be corrected using a convolutional neural network. Figure 9 shows the different results obtained using maximum likelihood estimation (Equation (4)) and SR-Net. Compared with the 128 × 128 array output, more detailed features of the statue can be observed, although less smooth. By zooming in on the details in the sub-figure, the common "grid effect" of the up-sampling network can be observed in this inference result. The grid effect can be eliminated by adding other information, for example an RGB image, or by expanding the neural network size. The proposed neural network, combining the probabilistic encoder and SR-Net, is a practical method to employ raw histogram bin data as inputs to up-sample a high-resolution depth map. (Figure 9. Comparison with maximum likelihood estimation; the "grid effect" generated by the up-sampling neural network is shown as a sub-image.)
Conclusions
In this work, in place of an on-chip histogram peak calculation algorithm, a neural network combining a probabilistic encoder and SR-Net is proposed for up-sampling depth maps from multi-scale histogram outputs. Regarding the algorithm, this work adopts the encoder network of a probabilistic encoder, which saves the hardware otherwise needed for detecting the peak value of histograms in SPAD imaging systems. To verify the algorithm, a simulated dataset based on the SPAD characteristics was generated using the Middlebury dataset. By training the up-sampling network, the method proposed in this work recovered a 16×-size depth image with RMSE = 0.022 m on the generated dataset. Regarding the hardware platform, the multi-scale histogram bins from a 32 × 32 sensor imaging system are extracted and re-scaled up to a 128 × 128 depth map with rich details. By deploying the algorithm on a deep learning accelerator, the latency of the entire neural network was decreased, and the frame rate reached 20 fps, which is competitive with other state-of-the-art works. The proposed SPAD imaging system offers a perspective on hardware and software co-design. Furthermore, this work could be expanded from 32× to 64× up-sampling in the future.
|
v3-fos-license
|
2016-05-12T22:15:10.714Z
|
2012-05-22T00:00:00.000
|
5863918
|
{
"extfieldsofstudy": [
"Chemistry",
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "http://downloads.hindawi.com/journals/tswj/2012/506486.pdf",
"pdf_hash": "581860e7e249ffd14883690db20538c8dee7e025",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46438",
"s2fieldsofstudy": [
"Chemistry",
"Medicine"
],
"sha1": "02a0f89fc0f40f931e12e416703d92ceab7fa876",
"year": 2012
}
|
pes2o/s2orc
|
Optical Method for Cardiovascular Risk Marker Uric Acid Removal Assessment during Dialysis
The aim of this study was to estimate the concentration of uric acid (UA) optically by using the original and processed ultraviolet (UV) absorbance spectra of spent dialysate. Also, the effect of using several wavelengths (multi-wavelength algorithms) for estimation was examined. This paper gives an overview of seven studies carried out in Linköping, Sweden, and Tallinn, Estonia. A total of 60 patients were monitored over their 188 dialysis treatment procedures. Dialysate samples were taken and analysed by means of UA concentration in a chemical laboratory and with a double-beam spectrophotometer. The measured UV absorbance spectra were processed. Three models for the original and three for the first derivate of UV absorbance were created; concentrations of UA from the different methods were finally compared in terms of mean values and SD. The mean concentration (micromol/L) of UA was 49.7 ± 23.0 measured in the chemical laboratory, and 48.9 ± 22.4 calculated with the best estimate among all models. The concentrations were not significantly different (P ≥ 0.17). It was found that using a multi-wavelength and processed signal approach leads to more accurate results, and therefore these approaches should be used in future.
Introduction
Uric acid (UA), a final product of the metabolism of purine, is a very important biological molecule present in body fluids. It is mostly excreted from the human body through the kidneys in the form of urine. The concentration of UA in blood increases when the source of UA increases or the kidneys malfunction. Hyperuricemia is the condition in which the UA concentration is above 7 mg/dL. UA is hard to dissolve in blood and will crystallise when supersaturated. The UA crystallites are deposited on the surface of the skin, in joints, and particularly in the toes, resulting in gout. Analysis of the UA concentration in blood helps to diagnose gout. In addition to gout, hyperuricemia is connected with lymph disorders, chronic haemolytic anaemia, an increase in nucleic acid metabolism, and kidney malfunction. Elevated serum UA contributes to endothelial dysfunction and increased oxidative stress within the glomerulus and tubulointerstitium, with associated increased remodelling fibrosis of the kidney [1]. A high level of serum UA, hyperuricemia, has been suggested as an independent risk factor for cardiovascular and renal diseases [2], especially in patients with heart failure, hypertension, and/or diabetes [3][4][5], and has been shown to cause renal disease in a rat model [6]. UA is mostly associated with gout, but studies have implied that UA affects biological systems [7] and could also influence the risk of higher mortality among dialysis patients [8], although the pathogenic role of hyperuricemia in dialysis patients has not been fully established [9]. High caloric foods and alcohol as well as disorders of the organs and tissues are the main causes of hyperuricaemia, obesity, kidney stone formation, and even gout [10]. It is likely that high UA levels in the blood are the reason for the emergence of renal microvascular disease, which may be a key mechanism in inducing salt-sensitive hypertension [11]. Harm can be prevented and reduced by early diagnosis and monitoring, especially by screening obese patients [12]. It would be advantageous to measure the concentration of UA online during dialysis. To create this opportunity it is necessary to build accurate and reliable models. UA may serve as a novel marker molecule for estimating the quality of the dialysis procedure: since UA is a uremic toxin itself, the removal pattern and the amount of this compound removed during dialysis are informative for patients and medical personnel.
Ways of monitoring UA, dialysate, and other biological fluids with optical tools have been shown previously by our group and by others [13][14][15]. More reliable results are achieved when a simple signal processing tool is used for smoothing and for calculating the first derivate of UV absorbance, and/or when absorbance or processed absorbance values from several wavelengths are used [16][17][18][19][20]. An effective way of estimating UA concentrations using the UV technique has been shown in previous studies by our group. The current paper, involving a larger number of patients from different countries, presents more general and accurate models, making it possible to apply the technique in the larger patient community.
The aim of this study was to estimate the concentration of uric acid (UA) optically by using the original and processed ultraviolet (UV) absorbance spectra of spent dialysate. Data from different dialysis centres and over a long period was used to build models to increase general validity and reliability.
Materials and Methods
All of the studies were performed after approval of the protocol by the Regional Ethical Review Board, Linköping, Sweden, and by the Tallinn Medical Research Ethics Committee at the National Institute for Health Development, Estonia. Informed consent was obtained from all participating patients.
During the period 1999-2009 seven studies were carried out in the Department of Dialysis and Nephrology at the Linköping University Hospital in Sweden and at the North Estonian Medical Centre in Estonia. Clinical setup of the experiments is presented in Figure 1. A summary of the studies and information about the participating patients are presented in Table 1.
The dialysers used in the studies, the effective membrane areas of the dialysers, the number of sessions when the respective dialyser was used, the type of dialysis machine used, and blood flow for the studies are presented in Table 2.
For all of the studies, samples of spent dialysate were taken at discrete times for analysis ( Table 3). The numbers under "sampling time" correspond to the number of minutes after the start of hemodialysis. The dialysate samples were taken at 255, 270, and 300 minutes when the duration of sessions was long enough. Also, the sample from the total dialysate collection tank was included in the analysis in most cases. Pure dialysate was collected before the start of a dialysis session and used as the reference solution when the dialysis machine was prepared and conductivity was stable. The concentration of UA was determined in the Clinical Chemistry Laboratories at the North Estonian Medical Centre and at Linköping University Hospital using standardised methods. The accuracy of the methods for the determination of UA in dialysate was ±5%.
Double-beam spectrophotometers (UVIKON 943, Kontron, Italy, and JASCO V-570, UV/VIS/NIR spectrophotometer, Japan, in Linköping and SHIMADZU UV-2401 PC, Japan, in Tallinn) were used for the determination of UV absorbance. Spectrophotometric analysis over a wavelength range of 190-380 nm was performed by an optical cell with an optical path length of 1 cm. A lower UV absorbance value is obtained at all wavelengths versus time due to a decreased concentration of UV-absorbing compounds in the blood when transported through the dialyser into the dialysate and removed from the blood during the dialysis treatment. The treatments were also monitored with a single wavelength online, and thereby all interruptions, self-tests, alarms, and so forth could be identified directly on a screen. Some of the measured values (absorbance or concentration) were excluded from data before analysis. The exclusion criteria were incorrect or illogical values of measured concentration or absorption, for example, sampling coexisting with selftests of the dialysis machine.
The obtained UV spectra were processed with a signal-processing tool using a Savitzky-Golay algorithm for smoothing and for calculating the first derivative, with a smoothing window of nine points (Figure 2). Panorama Fluorescence 1.2 was used for signal processing, and multiple stepwise regression analysis was performed with Statistica 9.0. Final data processing was performed in EXCEL (Microsoft Office Excel 2007).
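To illustrate this pre-processing step, the sketch below applies a nine-point Savitzky-Golay filter to an absorbance spectrum and computes its first derivative in Python; the polynomial order, the 1 nm wavelength step and the placeholder spectrum are assumptions for illustration only and do not reproduce the Panorama Fluorescence workflow.

# Minimal sketch of the smoothing and first-derivative step described above.
# Only the 9-point window is stated in the text; the polynomial order and
# wavelength grid are assumed.
import numpy as np
from scipy.signal import savgol_filter

wavelengths = np.arange(190, 381, 1.0)            # nm, assumed 1 nm step
absorbance = np.random.rand(wavelengths.size)     # placeholder spectrum

step = wavelengths[1] - wavelengths[0]
smoothed = savgol_filter(absorbance, window_length=9, polyorder=2)
first_derivative = savgol_filter(absorbance, window_length=9, polyorder=2,
                                 deriv=1, delta=step)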
On the basis of the UA concentrations measured in the laboratory, the measured UV absorbance spectra and the processed UV absorbance spectra, multiple regression analysis was carried out on the calibration set of material (data from 75 randomly selected dialysis procedures). UA was set as the dependent variable, and UV absorbance values between 190 and 380 nm were set as independent variables. Multiple linear regression (MLR) analysis using the forward stepwise regression method was employed to determine the best wavelengths for the models [21][22][23][24][25]. Using the stepwise regression method helps to avoid mistakes in the models due to possible collinearity of the independent variables [26]. For both UV absorbance (UVa) and the first derivative of UV absorbance (UVd), the number of steps was increased until no relevant improvement in model performance was achieved. At each step the model for estimation of UA was saved, resulting in different models for both UVa and UVd.
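The sketch below illustrates the kind of forward stepwise wavelength selection described above, using ordinary least squares and an RMSE criterion; the stopping rule, data shapes and use of numpy's least-squares solver are assumptions for illustration and do not reproduce the Statistica procedure.

# Hedged sketch of forward stepwise selection of wavelengths for the UA model.
import numpy as np

def forward_stepwise(X, y, max_terms=3):
    """X: (n_samples, n_wavelengths) absorbance matrix; y: laboratory UA values."""
    selected, remaining = [], list(range(X.shape[1]))
    for _ in range(max_terms):
        best_rmse, best_j, best_coef = np.inf, None, None
        for j in remaining:
            cols = selected + [j]
            A = np.column_stack([np.ones(len(y)), X[:, cols]])  # intercept + terms
            coef, *_ = np.linalg.lstsq(A, y, rcond=None)
            rmse = np.sqrt(np.mean((y - A @ coef) ** 2))
            if rmse < best_rmse:
                best_rmse, best_j, best_coef = rmse, j, coef
        selected.append(best_j)
        remaining.remove(best_j)
        print(f"step {len(selected)}: wavelength index {best_j}, RMSE = {best_rmse:.3f}")
    return selected, best_coef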
Models for the calculation of the concentration of UA (Y) are of the form

Y = a + b_1 x_1 + b_2 x_2 + ... + b_n x_n,

where a is the intercept, b_i is the slope of the i-th term and x_i is an independent variable (the value of the original or derivative UV absorbance at a certain wavelength). The obtained models were used on the data from the remaining 113 dialysis procedures (validation set) to calculate the concentration of UA, compare these values with the laboratory results and validate the different models.
Systematic error was calculated for the models as follows [26]:

BIAS = (1/N) Σ e_i,

where e_i is the residual and N is the number of observations. Standard error was calculated for the models as follows:

SE = sqrt( Σ (e_i − BIAS)^2 / (N − 1) ).

Root mean squared error was calculated for the models as follows:

RMSE = sqrt( (1/N) Σ e_i^2 ).
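A minimal sketch of these three error measures, assuming the residuals are defined as e_i = laboratory value minus model estimate:

# Error metrics for a validation set (laboratory vs. model-estimated UA).
import numpy as np

def error_metrics(y_lab, y_model):
    e = np.asarray(y_lab) - np.asarray(y_model)             # residuals
    bias = e.mean()                                          # systematic error (BIAS)
    se = np.sqrt(((e - bias) ** 2).sum() / (len(e) - 1))     # standard error
    rmse = np.sqrt((e ** 2).mean())                          # root mean squared error
    return bias, se, rmse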
Results
During regression analysis, three steps were considered sufficient after examining the behaviour of the root mean squared error (RMSE). From Figure 3 it was concluded that adding one additional wavelength to the models did not markedly improve the results in terms of RMSE. This was also confirmed by a t-test on the residuals, which differed significantly (at the P level 0.05) between models using absorbance or first-derivative values from one, two or three wavelengths, but were not significantly different for models using four wavelengths. As a result of the regression analysis, three models for UV absorbance and three models for the first derivative of UV absorbance were obtained, each using absorbance or derivative values from one, two or three wavelengths, respectively (Table 4). The models were labelled UVa 1WL for the model using the UV absorbance value from one wavelength, UVa 2WL for the model using two wavelengths, and so on; UVd 1WL-UVd 3WL denote the models using first-derivative values of UV absorbance from one, two or three wavelengths. Figures 4 and 5 show the wavelengths of the original UV absorbance and of the first derivative of UV absorbance included in the models for estimating UA concentration.
The models presented in Figures 4 and 5 were applied to the material to calculate UA concentrations, R2, BIAS, SE, and RMSE. The results are presented in Table 5.
The concentrations achieved by the models were not significantly different (P = 0.17-0.48) from the observed concentrations in the laboratory for any model.
The systematic and root mean squared errors differed significantly (at the P level 0.05) between several of the models in the validation group. The differences between individual values of the UA concentration from the laboratory and the UA values from two models (UVa 3WL and UVd 3WL) are presented in Figure 6.
The root mean squared error decreased as wavelengths were added to the models in the case of both the UVa and UVd models, and the decrease was slightly greater in the case of UVd models.
These results demonstrate that using UV absorbance from several wavelengths provides more accurate estimates of the concentration of UA. Using information from the first derivative of the spectra instead of the original UV absorbance spectra also produces a notable improvement.
Discussion
The results in Table 5 show that it is possible to estimate UA concentration in spent dialysate using UV absorbance data. The presented models were built on the calibration set of material, which contained absorbance values from Tallinn, Estonia, and Linköping, Sweden. The data included in the study were collected during seven studies from 1999 to 2009. The coefficient of determination, R2, between the laboratory and calculated values of UA is higher or equal for the UVd models than for the corresponding UVa models (0.91/0.93/0.93 versus 0.86/0.88/0.92 for the single/two/three-wavelength models) (Figures 4 and 5). Also, the systematic error and RMSE are lower when several wavelengths and/or derivative spectra are used (Table 5). This indicates that using several wavelengths instead of a single one produces a significant effect, which is larger when processed spectra are used instead of the original absorbance spectra. However, adding a third wavelength to the UVd model does not appear to improve the results in terms of R2, although the systematic error and RMSE do improve. To describe the differences between individual values of the UA concentration from the laboratory and the UA values from the models, a Bland-Altman plot was created for two models (UVa 3WL and UVd 3WL) (Figure 6); the differences in UA values were somewhat smaller for the model using derivative spectral values.
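As an aside, the mean difference and 95% limits of agreement underlying a Bland-Altman comparison such as Figure 6 can be computed with a few lines of code; the sketch below assumes two paired arrays of UA values (laboratory and model) and leaves the plotting out.

# Bland-Altman summary statistics for paired laboratory and model UA values.
import numpy as np

def bland_altman(lab, model):
    diff = np.asarray(model) - np.asarray(lab)
    mean_diff = diff.mean()
    sd_diff = diff.std(ddof=1)
    limits = (mean_diff - 1.96 * sd_diff, mean_diff + 1.96 * sd_diff)
    return mean_diff, limits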
Considering the improvements in model accuracy, systematic error and RMSE, signal processing and information from several wavelengths should be used in the future. In this study, the best result was achieved with the model using derivative spectral values at three wavelengths.
It was found that haemodialysis adequacy can be quantified using UV absorbance of spent dialysate. By using this method, it is possible to reduce costs by reducing the number of blood samples and amount of laboratory analyses [27].
A good way of estimating UA concentrations using the UV technique has been shown in previous studies [13,14,[16][17][18][19][20], but using signal-processing tools and absorbance information from several wavelengths substantially improves the accuracy and reliability of the results.
A previous study by our group [28] indicated that approximately 90% of the cumulative and integrated UV absorbance measured by the optical dialysis adequacy sensor originates from the ten main peaks of a particular dialysis treatment, one of which is UA. Another study, in which HPLC analysis was used, indicated that the main solute responsible for UV absorbance around 280 nm is UA [29].
As can be seen from Figure 7, the contribution of UA to total UV absorbance (UV(UA)/UV average, i.e., the average absorbance originating from UA in the dialysate divided by the average UV absorbance of the whole dialysate) is relatively large in the wavelength region of 280-310 nm. This explains the wavelengths appearing in the models. UA absorbance spectra have one minimum around 265 nm, which explains why this wavelength is also included in the models.
The high correlation between UV absorbance and UA could be explained by the characteristic absorbance around 294 nm for UA in combination with the relatively high molar extinction coefficients of UA in this wavelength region compared to other chromophores among uremic retention solutes eliminated from blood into spent dialysate during dialysis [30]. This makes it possible to determine UA concentration even when the technique does not solely measure UA.
The use of a Savitzky-Golay algorithm for smoothing and first-derivative calculation is an effective method of correcting baseline effects in spectra, which could explain the improvement in accuracy. Using UV absorbance and processed UV absorbance information from several wavelengths reduces randomness and is probably the reason why better results have been achieved.
In this study, multiple linear regression (MLR) analysis using the forward stepwise regression method was used to determine the best wavelengths for models. Using the stepwise regression method helps us to avoid mistakes in the models due to the possible collinearity of independent variables. It seems that models developed with MLR are relevant and work well in a validation set of material, although using other approaches like partial least squares regression (PLS-R) or principal component regression (PCR) to create models should be considered in the future [26].
The clinical aim in the future is to develop an online monitoring system that offers an estimation of the removal of clinically important solute and marker UA during haemodialysis.
Also, regarding the optical properties of UA, it is possible to develop an optical system to measure the UA concentration in blood and/or urine. This makes it possible to rapidly detect hyperuricemia widely and at an early stage. This is very important in preventing serious clinical issues caused by hyperuricemia [2-6, 8, 11, 12, 31].
An accurate optical method makes it possible to measure UA rapidly online without the need for blood samples and disposables or chemicals. Using a simple signal-processing tool and UV absorbance values from several wavelengths could be very helpful in achieving more accurate and reliable results.
Conclusion
This study investigated the effect of using several wavelengths and simple signal processing to estimate the concentration of UA in dialysate with an optical method. The data analysed were collected over 10 years: 60 patients participated and 188 dialysis sessions were monitored in several centres in different countries. It was found that a multi-wavelength, processed-signal approach leads to more accurate results. This approach enables the development of an advantageous, reliable and cost-effective method of measuring the concentration of UA, an independent risk marker of cardiovascular and renal diseases and also a novel risk factor for type 2 diabetes mellitus. The developed algorithms could be used in optical dialysis quality monitors integrated into dialysis machines, allowing several parameters, UA among them, to be monitored during dialysis. No blood needs to be sampled; the removal of solutes can be estimated solely by monitoring the spent dialysate. Such a method would evaluate the delivered treatment dose and make it possible to control treatments against set target values.
|
v3-fos-license
|
2016-05-04T20:20:58.661Z
|
2013-03-22T00:00:00.000
|
18901854
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0059532&type=printable",
"pdf_hash": "b1d0bbdeb08ce928ca7e6b71cdbedd9b0e4b1df5",
"pdf_src": "Grobid",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46440",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "b1d0bbdeb08ce928ca7e6b71cdbedd9b0e4b1df5",
"year": 2013
}
|
pes2o/s2orc
|
Prognostic Role of MicroRNA-181a/b in Hematological Malignancies: A Meta-Analysis
Background Emerging evidence has shown that miRNAs participate in human carcinogenesis as tumor suppressors or oncogenes and have prognostic value for patients with cancer. In recent years, the miR-181 family was found to be dysregulated in a variety of human cancers and significantly associated with the clinical outcome of cancer patients. MiR-181a and miR-181b (miR-181a/b) are the most investigated members of the family. However, the results for miR-181a/b from different studies were inconsistent. Therefore, we performed a meta-analysis to summarize all the results from the available studies, aiming to delineate the prognostic role of miR-181a/b in human cancers. Methods The identified articles were retrieved from the two main online databases, PubMed and EMBASE. We extracted and estimated the hazard ratios (HRs) for overall survival (OS), comparing high and low expression levels of miR-181a/b in the patients of the available studies. Each individual HR was used to calculate the pooled HR. Results Eleven studies of 1252 patients were selected for the final meta-analysis after a strict filtering and qualifying process. A fixed-effects or random-effects model was chosen depending on the heterogeneity between the studies. The subgroup analysis showed that high miR-181a/b expression was associated with longer OS in patients with hematological malignancies compared with low expression (HR = 0.717, P<0.0001). However, the expression of miR-181a/b was not significantly related to OS in patients with various cancers considered together (HR = 0.861, p = 0.356). Conclusion Our study indicates that the expression level of miR-181a/b is significantly associated with OS in hematological malignancies and can be an important clinical prognostic factor for those patients.
Introduction
MicroRNAs (miRNAs) represent a class of highly conserved and small (on average 22 nucleotides) noncoding RNAs which can regulate gene expression and thereby modulate various biological processes. MiRNAs were first discovered by the laboratory of Victor Ambros in 1993 [1], and knowledge of their critical roles in regulating proliferation, differentiation, apoptosis, development, metabolism and immunity has advanced greatly in recent years. Circumstantial evidence has indicated the potential involvement of several miRNAs in tumorigenesis, after the first report in 2002 that miR-15 and miR-16 are frequently deleted and/or downregulated in B-cell chronic lymphocytic leukemia [2,3]. A meta-analysis performed by Fu et al. [4] showed that elevated miR-21 expression was significantly associated with poor survival in patients with various types of carcinomas. Hence, miRNAs might act as oncogenes or tumor suppressors, and they could play a potential role as diagnostic and prognostic biomarkers of cancers.
Like protein-coding genes, miRNA sequences can be grouped into families, and the relationship between their structures and functions can be learnt from multiple sequence alignments within miRNA families. However, it is often the base-paired secondary structure that is conserved in miRNAs, rather than the conservation or similarity of primary sequences as in proteins [5]. A miRNA family usually has several members which differ by only 1-2 nucleotides. The miR-181 family is one such family; it is expressed in about 70 species and in various human cancers [6]. This family includes 4 members (miR-181a, miR-181b, miR-181c and miR-181d), which are highly conserved in their seed-region sequence and RNA secondary structure.
Among them, miR-181a and miR-181b (miR-181a/b), which are located together at the same loci, chr1q31.3 and chr9q33.3, are the most studied. Ciafre et al. [7] first reported that expression of miR-181a/b was significantly downregulated in primary glioblastomas and human glioblastoma cell lines compared to normal brain tissue, using microarray and northern blot analysis. Thereafter, miR-181a/b was found to be abnormally expressed in various cancers, including solid tumors and hematological malignancies. As in glioblastoma, significant downregulation of miR-181a was also observed in squamous lung cell carcinoma (SQCC), oral squamous cell carcinoma (OSCC) and non-small-cell lung cancer (NSCLC) [8][9][10]. However, miR-181a was significantly overexpressed in MCF-7 breast cancer cells and hepatocellular carcinoma (HCC) cells [11]. Other studies also reported that miR-181a has different expression levels in hematological malignancies: it is upregulated in acute myeloid leukemia (AML), especially in the M1 and M2 subtypes, and in myelodysplastic syndromes (MDS) [12,13], but downregulated in multiple myeloma (MM) and chronic lymphocytic leukemia (CLL) [14,15]. Notably, miR-181b shows the same expression pattern as miR-181a in human cancers. Considering that the seed regions of miR-181a/b are highly aligned and most of their predicted target genes overlap (Figure S1), miR-181a/b might be co-expressed and play critical roles together in human cancers. Since the results of the available studies are inconsistent, it is unclear whether miR-181a/b acts as an oncogene or a tumor suppressor. However, some clues can be found in several clinical studies investigating miR-181a/b as a prognostic factor in patients with cancers. Therefore, this literature review and meta-analysis were carried out to summarize the studies globally.
Guidelines and Search Strategy
This meta-analysis was performed according to the guidelines of the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) Statement issued in 2009 (Checklist S1). We carefully searched the online databases PubMed and EMBASE to identify relevant published studies from Jan 1st, 1993 to Oct 5, 2012. For PubMed, the contextual query language (CQL) was ''(mir-181[Title/Abstract]) OR (microRNA-181[Title/Abstract]) OR (mir-181a[Title/Abstract]) OR mir-181b[Title/Abstract]''; for EMBASE, the CQL was ''(mir-181 or microRNA-181 or mir-181a or mir-181b).ti,ab''. The reference manager software EndNote (X5, Bld5478) was used to remove duplicates. Candidate studies had to meet the following inclusion criteria: (i) the study investigated miR-181a/b in any type of human cancer; (ii) it measured miR-181a/b expression in human samples; (iii) it investigated the association between miR-181a/b and survival outcome. Further, the candidate articles were manually screened by 2 authors (S Lin and L Pan) independently and were excluded if they were: (i) review articles or letters; (ii) non-English articles; (iii) investigations of a set of miRNAs but not miR-181a/b alone; (iv) studies with non-dichotomous miR-181a/b expression levels; (v) studies lacking key information such as the hazard ratio (HR), 95% CI and P value. We also e-mailed the authors of some studies for additional information and data needed for our meta-analysis. The entire process was supervised by a third party (S Wang). Any disagreements were resolved immediately by four authors (S Lin, L Pan, S Guo and J Wu) after discussion.
Quality Assessment and Data Extraction
The quality of all eligible studies was systematically assessed.
The key components of a qualified study should include the following: (i) a clear definition of the study population; (ii) a clear definition of the type of carcinoma; (iii) a clear definition of the study design; (iv) a clear definition of the outcome assessment; (v) a clear definition of the measurement method for miR-181a/b; (vi) a clear definition of the cut-off for miR-181a/b expression; and (vii) a sufficient period of follow-up. Studies lacking any of the points mentioned above were excluded in order to increase the reliability of the meta-analysis. A flowchart of the study identification process is presented in Figure 1. The following information was carefully extracted from the full texts of the eligible articles: (i) publication details: first author's surname and publication year; (ii) characteristics of the studies: country of origin, sample size and tumor types; (iii) the miR-181a/b assessment methods and the cut-off definition; and (iv) the HR of miR-181a/b expression for overall survival (OS) as well as the corresponding 95% confidence interval (CI) and P value. If the HR and CI were not reported directly, the total observed death events and the numbers of patients in each group were extracted to calculate the HR and its variance indirectly [16]. If only Kaplan-Meier curves were available, data were extracted from the graphical survival plots; in this case, after dividing the time axis into non-overlapping intervals, the log HR and its variance were calculated for each interval, and these estimates were combined in a stratified manner to obtain the overall HR and 95% CI [16]. We presumed that miR-181a and miR-181b may have the same effect on patients' survival. For studies that reported HRs for miR-181a and miR-181b separately in the same set of patients, the combined HR was estimated by taking the square root of the product of the two HRs. If the authors reported both univariate and multivariate analyses for the HR, the result of the multivariate analysis including other variables was preferred because it is likely to be more accurate.
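As a simple illustration of the HR-combination rule described above (the actual analysis was performed in STATA), the sketch below computes the combined HR for a study reporting separate HRs for miR-181a and miR-181b as the square root of their product, i.e., their geometric mean; the example values are hypothetical.

# Combined HR for miR-181a and miR-181b reported in the same patient set.
import math

def combine_hr(hr_a: float, hr_b: float) -> float:
    return math.sqrt(hr_a * hr_b)

print(round(combine_hr(0.70, 0.80), 3))  # hypothetical inputs, ~0.748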
Statistical Analysis
Firstly, HRs with 95% CIs were combined to obtain the pooled estimate. The statistical heterogeneity of the studies was tested with the chi-square-based Q-test; when no heterogeneity across studies was identified, the fixed-effects model (the Mantel-Haenszel method) was used, and otherwise the random-effects model (the DerSimonian and Laird method) was applied. We also quantified the degree of heterogeneity using the I2 statistic. I2 values range from 0% to 100% (I2 = 0-25%, no heterogeneity; I2 = 25-50%, moderate heterogeneity; I2 = 50-75%, large heterogeneity; I2 = 75-100%, extreme heterogeneity) [17]. Secondly, evidence of publication bias was assessed with Begg plots and the Egger test (p < 0.05 was considered indicative of statistically significant publication bias). Finally, a sensitivity analysis was carried out by investigating the influence of each single study on the overall HR. All analyses were carried out using STATA v11.0 (Stata Corp., College Station, TX).
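For orientation, the sketch below shows a generic inverse-variance fixed-effect pooling of log-HRs together with Cochran's Q and the I2 statistic; it is a simplified stand-in with hypothetical inputs, not the exact Mantel-Haenszel or DerSimonian-Laird implementation used in STATA.

# Simplified fixed-effect pooling of study-level HRs with heterogeneity statistics.
import numpy as np

def pool_fixed(hrs, ci_low, ci_high):
    log_hr = np.log(hrs)
    se = (np.log(ci_high) - np.log(ci_low)) / (2 * 1.96)   # SE of log-HR from 95% CI
    w = 1.0 / se ** 2                                      # inverse-variance weights
    pooled = np.sum(w * log_hr) / np.sum(w)
    pooled_se = np.sqrt(1.0 / np.sum(w))
    q = np.sum(w * (log_hr - pooled) ** 2)                 # Cochran's Q
    i2 = max(0.0, (q - (len(hrs) - 1)) / q) * 100 if q > 0 else 0.0
    ci = np.exp([pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se])
    return np.exp(pooled), ci, i2

# Hypothetical study-level HRs and 95% CIs
print(pool_fixed(np.array([0.65, 0.80, 0.72]),
                 np.array([0.45, 0.60, 0.50]),
                 np.array([0.94, 1.07, 1.04])))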
Results
Data were extracted from 11 studies with a total of 1252 patients from the United States, China, Japan and Chinese Taiwan [10,[18][19][20][21][22][23][24][25][26][27]. All of them were retrospective in design. The types of cancers in these studies included solid tumors (colon cancer, NSCLC, OSCC, astrocytoma, gastric cancer and breast cancer) and hematological malignancies (cytogenetically normal AML, cytogenetically abnormal AML and CLL). Most of the studies used quantitative real-time PCR to measure the expression level of miR-181 (TaqMan: 6 and stem-loop: 2), and the others used a microarray method. Two studies each investigated 2 independent populations as a training set and a validation set [19,24]. Li et al. and Zhu et al. examined miR-181a and miR-181b respectively in the same population [19,27], whereas Yang et al. studied patients with both miR-181a and miR-181b overexpression [26]. Notably, the cut-offs for miR-181a/b differed between the studies, with the median value applied in 6 studies, and the mean, the highest tertile, the highest value of the 95% confidence interval and a 3-fold change used in the other studies (Table S1).
Table 1 shows the main results of this meta-analysis. First, we analysed miR-181a/b expression and OS in a variety of cancers; extreme heterogeneity (I2 = 76.9%, p < 0.0001) was present between the studies, so a random-effects model was applied to calculate a pooled HR (0.86, 95% CI: 0.629-1.184, p = 0.356), which was not statistically significant. Next, a subgroup analysis of hematological malignancies (n = 566) was carried out. The results showed only moderate heterogeneity between the studies of hematological malignancies (I2 = 36.1%, p = 0.166), and the pooled HR was more significant than any single HR of the individual studies (0.717, 95% CI: 0.631-0.816; p < 0.0001). Another subgroup analysis of miR-181a (n = 818) showed large heterogeneity (I2 = 62%, p = 0.015) and a statistically significant pooled HR (0.698, 95% CI: 0.532-0.914; p = 0.009). Both pooled HRs < 1 indicate that downregulated miR-181a and miR-181a/b may be associated with poor overall survival in various cancers and hematological malignancies, respectively (Figure 2). Finally, publication bias of the included studies was evaluated by Begg plots and the Egger test. As shown in Figure 3, the Begg plots were almost symmetric and the Egger's regression intercept was 0.509. There was no evidence of significant publication bias in this meta-analysis. Meanwhile, a sensitivity analysis was performed by omitting one study at a time to measure its effect on the pooled HR. As presented in Figure 4, no individual study dominantly influenced the overall HR.
Discussion
The present meta-analysis indicated that downregulated miR-181a/b could predict poor OS in patients with hematological malignancies, although the expression level of miR-181a/b was not significantly related to OS in patients with various cancers considered together. However, one should be circumspect in drawing a verdict on the association between miR-181a/b and human cancers, because several issues should still be considered. First, since the number of studies for each type of human cancer was less than 5, the reliability of our results might be weakened. A well-designed clinical study with a large number of cases of each specific cancer should be performed in the future to validate the relationship between miR-181a/b expression level and the prognosis of cancer patients. Second, unlike oncogenes or tumor suppressor genes, miRNAs are generally associated with tumorigenesis through regulating the expression of hundreds of target mRNAs. Whether miR-181a/b is an oncogene or a tumor suppressor depends on which target genes are dominantly under the family's control. Third, the precondition of our study is that miR-181a and miR-181b are co-expressed in cancers and play an important role together in tumorigenesis.
However, the subgroup analysis showed that a low expression level of miR-181a, but not of miR-181b (data not shown), was significantly related to poor survival outcome in patients. Similarity in primary sequences between miRNAs does not equal similarity in their functions. For instance, miR-181a and miR-181c have only a one-nucleotide difference in their mature miRNA sequences, but only miR-181a can promote CD4 and CD8 double-positive (DP) T cell development when ectopically expressed in thymic progenitor cells. The distinct activities of miR-181a and miR-181c are largely determined by their unique pre-miRNA loop nucleotides [28]. Although the seed regions of miR-181a and miR-181b are highly aligned and most of their predicted target genes overlap, they might act differently in different kinds of cancers. Future studies considering miR-181a/b both in combination and separately should be performed.
We also concede that there are several limitations in our meta-analysis. First, heterogeneity existed in our meta-analysis and was probably due to differences in the baseline demographic characteristics of the populations, the tumor types, the disease stages, the cut-off value of miR-181 expression, the duration of follow-up, etc. When we divided the studies into solid tumors and hematological malignancies, the heterogeneity was markedly reduced. Second, although there was no significant evidence of publication bias in this meta-analysis, caution should be taken because only studies published in English were selected, which could cause language bias. The tendency for journals to publish positive results could also introduce a certain bias.
In recent years, the miR-181 family has been found to be associated with tumorigenesis. In differentiated mouse embryonic stem cells (ESCs), miR-181a is one of the miRNAs that post-transcriptionally downregulate and maintain the low protein expression of silent mating-type information regulation 2 homologue 1 (SIRT1), which regulates processes such as transcription, apoptosis and muscle differentiation by deacetylating key proteins [29]. Studies have also reported that miR-181a is frequently downregulated in OSCC and may function as an OSCC suppressor by targeting K-ras [9]. Likewise, miR-181b can enhance matrix metallopeptidase (MMP) 2 and MMP9 activity and promote growth, clonogenic survival, migration and invasion of hepatocellular carcinoma (HCC) cells by modulating a tumor suppressor, the tissue inhibitor of metalloproteinase 3 (TIMP3). Depletion of miR-181b inhibited tumor growth of HCC cells in nude mice [30]. Further studies reported that overexpression of miR-181b could mediate tamoxifen resistance in breast cancer by downregulating TIMP3 and facilitating growth factor signaling [31]. Downregulation of miR-181b in human gastric tissues could elevate the expression of cAMP responsive element binding protein 1 (CREB1), which suppressed the proliferation and colony formation rate of gastric cancer cells [32]. Together, these findings suggest that miR-181a/b plays an important role in human tumorigenesis.
MiR-181 is preferentially expressed in hematopoietic cell lineages and is involved in erythropoiesis and granulocytic and megakaryocytic differentiation [33][34][35][36]. Cuesta et al. [37] found that miR-181a inhibits the translation of the cell cycle inhibitor p27 via 2 functional miR-181a-binding sites in the 3'UTR of p27, and that downregulation of miR-181a causes cell cycle arrest and full differentiation of myeloid cells. MiR-181a could promote CD4 and CD8 double-positive (DP) T cell development when ectopically expressed in thymic progenitors [28]. In situ hybridization (ISH) in tonsil tissue sections showed a gradual decrease of miR-181b staining intensity from the dark to the light zone in germinal center B cells [38]. These findings indicate the significance of miR-181 in human hematopoietic development.
The importance of miR-181 in hematopoiesis has led most studies to focus on the role of the miR-181 family in hematological malignancies. The pooled HR (0.717, 95% CI: 0.631-0.816) of our meta-analysis showed that a low level of miR-181a/b expression was significantly related to poor prognosis in patients with hematological malignancies, suggesting that miR-181a/b might act as a tumor suppressor. For example, miR-181a is downregulated in chronic myeloid leukemia (CML), and overexpression of miR-181a effectively suppressed cell growth and induced apoptosis in the CML cell line K562 [42]. Downregulation of miR-181a/b resulted in increased expression of TCL1 and BCL1, which are both lymphoid proto-oncogenes [39,40]. In line with this, the downregulation of miR-181a in CLL samples also resulted in significant overexpression of pleomorphic adenoma gene 1 (PLAG1) [41]. Luciferase reporter and western blot assays confirmed that RalA is a direct target of miR-181a. However, other studies support an oncogenic role for miR-181a/b. For example, high expression of miR-181a could lead to a decrease of the proapoptotic protein Bim in T-cell lymphoma and non-Hodgkin lymphoma cell lines [43,44]. MiR-181b was downregulated in the acute promyelocytic leukemia (APL) cell line NB4 after treatment with pharmacological doses of all-trans retinoic acid (ATRA) [45], whereas high expression of miR-181a could sensitize the APL cell line HL-60 to Ara-C treatment [46]. These seemingly paradoxical observations could be explained by the fact that ATRA induces differentiation of APL cells, whereas Ara-C promotes apoptosis. It is still unclear how exactly miR-181a/b works in hematological cancers. Nevertheless, miR-181a/b could at least serve as a useful biomarker.
Since miRNAs have unique expression profiles in cancerous samples compared to normal tissue, they are considered potential biomarkers for cancer prognosis. We show here that miR-181a/b is very promising for prognosis prediction in hematological malignancies. Samples from patients with hematological cancers can easily be obtained from peripheral blood, making life-long monitoring of miR-181a/b feasible for these patients. However, several problems should be resolved before miR-181a/b can become a routine clinical application. First, the lack of abundant miR-181a/b expression data in the global population makes it difficult to set a standard value for the measurement of miR-181a/b. Second, a group of miRNAs might perform better than a single miRNA. Marcucci et al. [47] detected a set of miRNAs in AML patients (including miR-181a/b) and calculated a miRNA summary value as a compound predictor to relate miRNA expression to the 5-year event-free survival of patients. More studies should be carried out to compare the prognostic power of miR-181a/b with that of a group of selected miRNAs.
Conclusion
Our meta-analysis, representing a quantified synthesis of all published studies of miR-181a/b, has shown that low expression of miR-181a/b is significantly associated with poor survival in patients with hematological malignancies. More clinical investigations should be conducted before miR-181a/b can be implemented into routine clinical management. However, it is still unclear whether miR-181a/b acts as a tumor suppressor or as an oncogene. Our study could aid in resolving this issue by demonstrating the performance of miR-181a/b in the clinic and providing clues for future investigations.
Figure 2 .
Figure 2. Forest plots of studies evaluating the HR of overall survival comparing high and low miR-181 expression. (a) Analysis of miR-181a/b expression in a variety of cancers, (b) analysis of miR-181a/b expression in hematological malignancies, (c) analysis of miR-181a expression in a variety of cancers. doi:10.1371/journal.pone.0059532.g002
Figure 3 .Figure 4 .
Figure 3. Begg's funnel plot for publication bias analysis. Each point represents a separate study, lnhr is the natural logarithm of the HR, and the horizontal line represents the mean effect size. doi:10.1371/journal.pone.0059532.g003
Table 1. Main results of the meta-analysis. (a) Analysis of the association of miR-181a/b and OS in a variety of cancers; (b) subgroup analysis of the association of miR-181a/b and OS in hematological malignancies; (c) subgroup analysis of the association of miR-181a and OS in a variety of cancers; (d) the P value was calculated using the fixed-effects model (the Mantel-Haenszel method). doi:10.1371/journal.pone.0059532.t001
|
v3-fos-license
|
2023-08-09T15:03:20.565Z
|
2023-08-07T00:00:00.000
|
260716808
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fbioe.2023.1204861/pdf",
"pdf_hash": "ae79e9d0384f3bf0f268bbb995c910bd31e9773c",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46441",
"s2fieldsofstudy": [
"Materials Science"
],
"sha1": "d279bdaf644a79879c9aef4d01870dc4389b0d7d",
"year": 2023
}
|
pes2o/s2orc
|
Recent technological innovations in mycelium materials as leather substitutes: a patent review
Leathery mycelium materials, made from the vegetative part of filamentous fungi, have garnered significant interest in recent years due to their great potential of providing environmentally sustainable alternatives to animal- and plastic-based leathers. In this systematic patent review, we provide an in-depth overview of the fabrication methods for mycelium materials as leather substitutes recently described in patents. This overview includes strategies for fungal biomass generation and industrial developments in the sector. We discuss the use of various fungal species, plasticizers, crosslinking agents, and post-processing techniques, thereby highlighting potential gaps in scientific knowledge and identifying opportunities, challenges, and concerns in the field. Our analysis suggests that mycelium materials have significant potential for commercialization, with a growing number of companies betting on this new class of biomaterials. However, we also reveal the need for further scientific research to fully understand the properties of these materials and to unlock potential applications. Overall, this patent review delineates the current state of the art in leathery mycelium materials.
Introduction
As society looks for environmentally conscious solutions to tackle issues related to ecological destruction and resource scarcity by seeking to broaden the sustainable material base, researchers are turning to biology for inspiration in the design and engineering of advanced materials. One area of growing interest is the development of mycelium materials, which are made from the vegetative part of filamentous fungi. These materials can be grown on a wide variety of agricultural and industrial organic waste or side streams, which has led to a burgeoning interest in experimentation with new mycelium species and the design of new fermentation setups. Many industries, from construction to chemicals and textiles manufacturers, are pressured towards biobased and circular economy strategies due to consumer demand, evolving environmental regulations, and industry-imposed targets, resulting in a rising interest in biomaterials. In addition, mycelium materials can offer a relatively low-cost and environmentally sustainable alternative to some petroleum-based materials (Stelzer et al., 2021;Livne et al., 2022;Williams et al., 2022).
While the initial focus was placed on the production of lignocellulosic mycelium composites, generating dense or semi-dense solid materials with potential application in the construction and packaging industries (Jones et al., 2017;Abhijith et al., 2018;Elsacker et al., 2020), a recent shift in interest has occurred towards the development of new processes in which the fungus is grown as a biological tissue or mat on top of a liquid and/or solid substrate, or as fungal biomass in submerged liquid fermentation (Jones et al., 2020a;Gandia et al., 2021b;Vandelook et al., 2021). These materials consist mainly of fungal biomass, have textile-, leather- or foam-like properties and can display functionalities as a leather-like substitute material (e.g., clothes, bags, and seat covers). There is an increasing need for environmentally friendly alternatives because traditional leather production has its limitations. Leather is a by-product of the animal farming business, which ties it to an industry responsible for a large carbon footprint as well as other ecotoxicities (water pollution, human health, and land-use impacts) and the intensive use of hazardous chemicals in the hide tanning process. Conversely, plastic-based leathers, such as "PU leather" or "vegan leather", have a lower carbon footprint than animal leather during their production, but they are dependent on fossil resources and have negative environmental effects (microplastic pollution, landfill, and ocean accumulation). Consequently, the increasing demand for sustainable materials has led to the development of "alternative leather" technologies based on mycelium and various other organic streams (e.g., Piñatex, Vegea or Fruitleather).
The recent release of multiple prototypes of leathery mycelium materials is indicative of sustained commercial interest. Since 2019, there has been a clearly visible increase in patent filings on mycelium materials, fungal fermentation technologies and functionalization strategies aimed towards the commercialization of animal and synthetic leather substitutes. Consequently, a new ecosystem of companies betting on mycelium is emerging, following in the footsteps of early adopters and current industry leaders like Ecovative, Mycoworks, and Mogu. This has led to aggregated knowledge clusters in the patent landscape, which tend to focus on the effectiveness and usability of the technology in relation to its economic value, rather than on the generation and dissemination of a significant body of scientific knowledge and data.
In this systematic review, the primary objective is to analyze the patent landscape in the field of fungal-based leather-like materials and to provide insights into the innovative technologies and approaches used in the fabrication of fungal materials as leather substitutes. We discuss recent trends in fungal fermentation techniques and industrial developments in the sector. An overview is provided of the use of various fungal species, plasticizers, crosslinking agents, and post-processing techniques. We also identify potential gaps in scientific knowledge, as well as opportunities, challenges, and concerns in the field. By focusing primarily on patent literature, our aim is to offer an application-focused review of the advancements made in the industrial sector of mycelium materials. Due to the scope of this paper, we do not include a broader set of references from the scientific literature. Interested readers are therefore directed to existing literature reviews for further information (Jones et al., 2020a;Gandia et al., 2021b;Vandelook et al., 2021;Peeters et al., 2023).
Patent search
The Espacenet website was used to search for patents based on a keyword search approach in the patent titles and abstracts. The following combinations of keywords were used: "mycelium" and "leather", or "mycelium" and "mat", or "fungal" and "leather", or "fungal" and "mat", or "mycelium" and "material", or "nonwoven" and "mycelium", or "mycelium" and "textile", or "mycelium" and "flexible" and "material". The search was focused on the IPC class C12N1/14, which is used for fermentations with fungi. The time interval was limited to patents filed or granted in the period from 2009 to 2023. Results were screened for relevance by manually reading the abstracts or entire patents, omitting patents concerning solid mycelium composites, food or medical applications. The remaining patents were additionally validated with Google Patents to analyze the chronology of events and the countries where they were granted. The patent search was conducted in February 2023 and a total of 36 patents were selected that cover mycelium leather-like materials. Patents that were not included in the Espacenet database or patents that did not mention the above keywords in the title or abstract or that were abandoned by the applicants, were not used in this study.
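For transparency, the sketch below simply assembles the keyword combinations listed above as they could be generated programmatically; the joined query strings are a generic illustration and do not reproduce Espacenet's own query syntax, and the IPC class (C12N1/14) and the 2009-2023 filing window were applied on the search platform itself.

# Enumerate the title/abstract keyword combinations used in the patent search.
keyword_sets = [
    ("mycelium", "leather"), ("mycelium", "mat"),
    ("fungal", "leather"), ("fungal", "mat"),
    ("mycelium", "material"), ("nonwoven", "mycelium"),
    ("mycelium", "textile"), ("mycelium", "flexible", "material"),
]

queries = [" AND ".join(f'"{term}"' for term in combo) for combo in keyword_sets]
for q in queries:
    print(q)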
Overview of patents on mycelium leather-like materials
In this study, a selection of 36 patents was made that cover fungal-derived materials with the intended application as a leather substitute or as a textile or fabric. Patents relating to the use of rigid mycelium composites (Cerimi et al., 2019) or fruiting bodies to make amadou and other related materials are not addressed (Gandia et al., 2021b).
Interestingly, two patents were already granted to the applicants Ford and Ecovative in 2011 and 2012. Although the methods provided do not specifically focus on a leather replacement use, as all patents beginning in 2019 do, these two patents address the production of mycelium mats. Ecovative's patent, in particular, details how mycelium would naturally grow over the surface of a nutrient-rich fluid, solid, or solid-liquid boundary (woven or matt fiber atop nutritional broth) and how it may be harvested for thin film applications (Mcintyre et al., 2012). Afterwards, these mycelium sheets can be processed (cut, pressed) to graft desired two-dimensional characteristics on individual sheets (Mcintyre et al., 2012).
Starting at the end of 2019, the number of patent filings describing mycelium leather-like materials has increased considerably (Figure 1A). As of today, only 11 of the 36 relevant patents have been granted, with the majority going to applicants in the United States. With a total of 25 remaining pending applications, it is likely that the number of granted patents on mycelium leather-like materials will continue to increase in the near future (Figure 1B). Ecovative (5 granted and 3 pending applications) and Mycoworks (3 granted and 6 pending applications) now hold the bulk of the patents, followed by the Chinese academic institute Gansu Academy Sciences Institute Biology (2 granted applications) (Figure 1C). The remaining patent applications, which are mostly pending, are divided among different companies, each of which has one or two applications.
When a material is described as a mycelium-based material, it is expected that the majority of the material's composition is derived from fungal biomass. Therefore, the choice of the fungal species can significantly influence the production process and the final material properties through its biological characteristics. The examination of which species were mentioned in the selected patents involved a meticulous analysis of both the claims and the descriptions provided within each patent. Fungal species were either explicitly stated in the claims or were only listed as examples without additional information in the claims section.
On the basis of the elements mentioned above, a total of 69 organisms were identified across granted and non-granted patents (Table 1). Ganoderma (mentioned in 5 patents) and Trametes (mentioned in 4 patents) are the most commonly listed genera in patent descriptions, followed by Fomes, Fusarium, Pleurotus, and Schizophyllum (each mentioned in 3 patents). Except for Fusarium, which is an Ascomycete, all of the above-mentioned species are members of the Basidiomycetes.
Upon critical evaluation of the patents, it must be recognized that the mere mention of certain species in the claims or description of the patents do not guarantee their effectiveness or compatibility with the production and application of mycelium materials. This information is only a starting point for a more in-depth analysis and comparison. Furthermore, given the large phylogenetic diversity of filamentous fungi (Hawksworth and Lücking, 2017), it could be envisaged that there is still an unexplored wealth of species with the potential to be used in the production of mycelium materials with different properties and unknown advantages. Besides investigating the diversity of natural strains, genetic engineering of already good performing strains can be another approach to improve material characteristics (Vandelook et al., 2021;Bayer et al., 2022).
Fermentation techniques, apparatuses and systems
There are three main strategies to generate fungal biomass intended for the application of leather-like mycelium materials: solid-state surface fermentation (SSSF), liquid-state surface fermentation (LSSF), and stirred submerged liquid fermentation (SSLF). While SSSF and LSSF fermentation techniques allow the mycelium to be grown as one whole sheet, SSLF results in a lowercohesion broth with slurry or pellets that requires further processing to be formed as a coherent piece of material. SSSF involves introducing a fungal organism to a solid growth substrate, allowing for the development of mycelial tissue at the surface of the substrate (Figure 2A). This method often utilizes lignocellulosic substrates, similar to those utilized in the cultivation of edible mushrooms, due to their affordability and availability. LSSF can either use lignocellulosic fibers mixed into a liquid broth or uses a completely dissolved nutrient solution, resulting in the growth of a fungal tissue at the liquid-air interface in a static setup ( Figure 2B). SSLF entails the cultivation of submerged fungal biomass in a liquid medium using a bioreactor, bubble column reactor, or shake flask setup ( Figure 2C).
In SSSF, the mycelium tissue grown at the surface can be easily separated from the substrate by cutting off the top mycelium layer. The remaining substrate can be used to produce lignocellulosic mycelium composites for different applications. The substrate is either pre-grown with a fungal inoculum in bags or introduced directly into an enclosed mold for incubation to produce a pure mycelium layer.
The patent of Ross et al. (Mycoworks) describes a strategy that involves embedding a porous intermediate membrane, made of a fungus-resistant polymer, on top of the solid substrate (Ross et al., 2020b) (Figure 2D). The hyphae grow through this intermediate layer, allowing easy separation of the fungal material from the nutritive substrate. The growth direction of the organism can potentially be guided by electrical actuation (Ross et al., 2020b). In the SSSF process developed by Mycoworks (Figure 3A), an intermediate layer is placed at the bottom of the mold. The inoculated substrate is then packed on top, and mechanical pressure is applied to flatten the layer. The mold is covered and incubated for 2-4 days to facilitate the growth of the fungus inside the substrate and through the intermediate layer. The solid substrate block is then removed from the mold and flipped, placing the intermediate layer on top. The substrate block is returned to the mold and incubated, stimulating hyphal growth away from the substrate and into the air. Upon visible growth of the mycelium through the intermediate layer, a contaminant-free cellulose-based textile (e.g., cotton) is placed on top of the hyphae to form a composite material with improved material properties. Optimal growth conditions include a humid environment (humidity range of 20%-100%), high oxygen content, a temperature of 22°C-25°C, and total darkness. The fungus continues to grow over the next 2 weeks, during which daily manipulations, such as flattening the mycelium sheet with a rolling pin in different directions, are performed (Figure 3B). This process can be repeated with multiple layers to enhance bonding. Once the desired thickness is reached, the intermediate layer is delaminated from the solid substrate, followed by several post-growth processing steps.
The method developed by Bentangan et al. (Mycotech) involves growing a mycelium layer on a solid substrate, but without utilizing an intermediate layer to separate the mycelium from the substrate or rolling the hyphae in different directions (Bentangan et al., 2020). As a result, residual substrate particles can remain attached at the bottom of the mycelium sheet, which must be cleaned with a brush. To preserve the mycelium tissue and prevent rot, it is treated with salt and boiled in water (Bentangan et al., 2020).
The fermentation strategy developed by Kaplan-Bie et al. and Greetham et al. (Ecovative, 2023) involves stimulating abundant aerial mycelium growth on top of a solid substrate under a high CO2 concentration, directed airflow and micro-droplet deposition (Figure 4A) (Kaplan-Bie et al., 2022). Typically, the CO2 concentration is kept at 5%, the temperature can fluctuate between 30°C and 32°C, the airflow rates vary between 2.8 and 10 m3/min, and the mean mist deposition rate is less than or equal to about 5 microliter/cm2/hour. Under these conditions, the undifferentiated mycelium grows into the void space; it is then separated from the substrate and dried. This results in a thick foam-like mycological biopolymer composed entirely of fungal mycelium (Figure 2E). Open trays containing the nutritive substrate and organism are placed in an incubation chamber (Figure 4B) where lateral or perpendicular airflow with a high carbon dioxide content is directed above the trays. Specific airflow velocities can produce different aerial mycelium densities (Kaplan-Bie et al., 2022). A mist containing solutes (e.g., minerals, proteins or carbohydrates) is circulated through the incubation chamber and deposited onto the growing tissue. The controlled environment established in the process allows the growth and development of the mycelium to be influenced by applying various morphological modifiers. Small variations in parameters such as relative humidity and airflow speed can noticeably affect the properties of the resulting fungal biopolymer. For example, adjusting the relative humidity from 99% to less than 98% for 4-72 h induces densification of the fungal tissue, which can then be grown in a less dense manner by raising the humidity back to 99%, resulting in a biopolymeric foam with layers of differing density. The patent also states that the tensile strength of the mycelium material increases with an increased airflow speed (Kaplan-Bie et al., 2022). Similar to Ross et al.'s invention, a non-substrate porous layer can also be placed on top of the substrate to reinforce the mycelium material or to facilitate the removal of the mycelium tissue. After being removed from the substrate, the material is further processed to improve density and/or mechanical strength.
Liquid-state surface fermentation
LSSF, as described in the patent filed by Mogu, begins by inoculating and growing a filamentous fungus on a sterilized or pasteurized solid lignocellulosic substrate, which may be supplemented with seeds, seed flour, starch powder, and/or minerals. A liquid medium, such as Malt Extract Broth (MEB), Malt Yeast Extract Broth (MYEB), or Potato Dextrose Broth (PDB), is added to the solid substrate at a ratio of 2%-5% medium per total weight of solid substrate. The incubation is performed in static and aerobic conditions, in the dark and at temperatures between 20°C and 30°C. This growth phase continues until the fungus fully colonizes the substrate, typically taking 5-15 days.
In a second phase, a homogeneous, viscous fungal slurry is prepared by blending the colonized lignocellulosic substrate with sterile water. The colonized medium is mixed with water at a ratio of 2 g of substrate per 10 mL of water. The resulting semi-solid substrate is then placed in a flat container (Figure 5A) and incubated until a fungal tissue forms on the top surface of the slurry (Figure 2C; Figure 5B). Incubation is carried out in static and aerobic conditions at a constant temperature. Depending on the fungal species, a constant CO2 concentration of 2,000-2,500 ppm is maintained. After the mycelium reaches the desired thickness and density, typically after 10-18 days, it can be harvested by peeling it off the digested slurry underneath and rinsing its surface with water (Figure 2F; Figure 5C).
Fungal sheets can also be combined and re-incubated for at least 2 days to form a multilayer fungal material. During this process, new hyphae will grow and form a natural bond between the different layers. The mechanical properties of the mycelium material can be enhanced by adding a porous material, such as a layer of fibrous textile (e.g., hemp, linen, or cotton) or a polymer, on top of the surface. This allows the fungi to grow into the layer without digesting it. Additionally, including foaming agents like carrageenan (0.1%-1%) or albumin (0.1%-1%) in the slurry creates a foamy substance with many air bubbles, which benefits the growth of the mycelium. At this stage, other additives such as cellulose acetate, chitin or chitosan, corn zein and starch, sucrose, dextrose, malt extract, or molasses may also be added. Finally, the fungal material undergoes further post-growth processing.
Stirred submerged liquid fermentation
The SSLF strategy, described in a patent by Szilvay et al. (2021) (VTT), involves cultivating a fungal species in a stirred submerged liquid suspension (Figures 2C, G). By allowing the organism to grow inside a bioreactor, bubble column reactor, or shaking flask setup (at 200 rpm), a large amount of fungal biomass can be produced (Szilvay et al., 2021). An important difference between SSLF and the other fermentation techniques is the requirement of active stirring to keep the organism submerged throughout the cultivation process, in contrast to the more passive cultivation strategy used in static surface fermentation, which requires minimal energy input.
It is preferable to use water-soluble nutritional sources to create a homogeneous liquid medium for efficient stirring, although small particles could also be used to offer an anchoring point for the fungal organism. One example of small particles added to the stirred solution is nanocellulose fibrils (Szilvay et al., 2021). This method, previously reported in the scientific literature in the context of aerogel production by incubation of a fungal species with nanocellulose (Attias et al., 2020), can be combined with polymers, fibers, and coloring agents during or at the end of the fermentation process (Szilvay et al., 2021).
After the SSLF process is completed, the cultured mycelium is filtered out of the liquid growth medium, homogenized and washed with water. Crosslinking agents, such as citric acid, can be added, as well as plasticizers, either during stirring or after the washing step. The processed mycelium is then dried, optionally through a heat treatment. Mycelium sheets can be produced using lyophilization and/or vacuum filtration through membranes (Appels et al., 2020;Attias et al., 2020). Semidried films are carefully transferred to a frame for constrained drying to prevent shrinkage and wrinkling (Attias et al., 2020) or covered with cellophane on a flat surface (Appels et al., 2020).
Post-growth plasticizing and crosslinking strategies
If left untreated, mycelium materials will become stiff and brittle when fully dried (Appels et al., 2020). Therefore, a plasticizing agent is applied to keep the sheets flexible. Typically, any kind of polyols can be used to plasticize mycelium-based materials (composed of glucan, chitin and/or chitosan biopolymers), such as propylene glycol (Ross et al., 2020b). Alternatively, the plasticizer can also be selected from sugar alcohols, epoxy esters, ester plasticizers, glycerol esters, phosphate esters, terephthalates, leather conditioners, acetylated monoglycerides, alkyl citrates, epoxidized vegetable oils, methyl ricinolate, or other common polymers plasticizers (Szilvay et al., 2021).
Hyphal cell walls can be chemically crosslinked, which increases the strength and stiffness of the material while preserving its extendibility. For example, citric acid was found to react with the glucan hydroxyl groups present in the cell wall, crosslinking the cell walls of neighboring hyphae together (Szilvay et al., 2021). The crosslinking agents can be polycarboxylic acids, tricarboxylic acids, dicarboxylic acids, glutaraldehyde, and tannins (such as pyrogallols). In another embodiment, the crosslinking target is an amine group present on the deacetylated chitin polymer (chitosan) (Ross et al., 2020b). Enzymes potentially expressed and secreted by the fungi during the cultivation process, such as oxidases and oxidoreductases, or laccase and tyrosinase, can also be used to crosslink tannins, lignin or vanillin into the mycelium cell wall (Szilvay et al., 2021). Additionally, fungal strains producing crosslinking agents such as dicarboxylic and tricarboxylic acids can be used. Crosslinking can also be achieved through heat treatment ranging from 90°C to 150°C (Szilvay et al., 2021), for example, by pressing the mycelium sheets between hot plates.
An effective crosslinking strategy for mycelium materials, developed by Ecovative (2023), Kaplan-Bie (2018), involves the combination of an organic solvent solution (e.g., alcohol), a calcium chloride solution, and a phenol/polyphenol solution. The organic solvent enables penetration of the material, rinses away extracellular materials, denatures proteins and partially deacetylates chitin. The phenols added to the treatment solution act as crosslinking agents, creating covalent bonds between the primary amine of deacetylated chitin (chitosan) and the amine and hydroxyl groups of amino acid residues, improving the mechanical properties of the final material. The use of salt as a humectant and antimicrobial agent ensures functional preservation of the material and provides added protection against microbial growth. The addition of methanol and calcium chloride further deacetylates chitin and mediates bond formation (Kaplan-Bie, 2018). In water, the salt can form ionic bonds with the same functional groups, further reinforcing the structure of the mycelium material. The steps in this technique are as follows (Kaplan-Bie, 2018). First, a solution of 10 g/L tannic acid powder in water is prepared, into which the mycelium material is immersed for 7 days. The mycelium material is then placed in a bath of 150 g/L salt in 100% alcohol (e.g., isopropyl, ethanol, methanol) for up to 7 days, and this step is repeated. The material is then taken from the bath and pressed between rollers. It is immersed in 100% alcohol for 1 day before being pressed again. After that, the material is allowed to air dry before being treated with a plasticizer, such as a 20 g/L glycerin or sorbitol solution in water, to achieve the desired softness and flexibility (Kaplan-Bie, 2018).
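For readers who want an at-a-glance view of this treatment sequence, the sketch below is a minimal structured restatement of the steps described above; the step labels and dictionary keys are our own and do not come from the patent itself, and durations described as "up to" are encoded as maxima.

```python
# Structured restatement of the Kaplan-Bie (2018) post-growth treatment described above.
# Step labels and keys are illustrative, not terminology from the patent.
KAPLAN_BIE_TREATMENT = [
    {"step": "tannic acid soak",  "bath": "10 g/L tannic acid in water",                    "max_days": 7},
    {"step": "salt/alcohol bath", "bath": "150 g/L salt in 100% alcohol",                   "max_days": 7, "repeat": True},
    {"step": "roller press",      "bath": None},
    {"step": "alcohol immersion", "bath": "100% alcohol (isopropyl, ethanol or methanol)",  "max_days": 1},
    {"step": "second press",      "bath": None},
    {"step": "air drying",        "bath": None},
    {"step": "plasticizing",      "bath": "20 g/L glycerin or sorbitol in water"},
]

# Print the sequence as a compact checklist.
for s in KAPLAN_BIE_TREATMENT:
    print(f'{s["step"]:<18} {s.get("bath") or "-"}')
```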
Mycoworks' crosslinking strategy primarily acts on chitosan, which has easily reactive primary amine groups that form amide bonds during crosslinking (Deeg et al., 2017; Chase et al., 2019). To partially deacetylate the chitin within the fungal material and the chitin nanowhiskers, they are submerged in an aqueous solution of 40% by weight sodium hydroxide at 80°C for a period ranging from one minute to ten hours (Chase et al., 2019). This process can achieve a desired degree of acetylation from 1% to 50%. After this step, the fungal material is impregnated with chitin nanowhiskers by soaking and agitation. The addition of nanoparticles greatly enhances the performance of chitinous structures. Chitin nanowhiskers can be used to crosslink the primary amine groups in chitosan together with the blocked isocyanate crosslinker hexamethylene-1,6-di-(aminocarboxysulfonate) (Araki et al., 2012). The nanowhiskers fill the gaps between the cell wall chitosan chains, forming a nanocomposite which is further strengthened through crosslinking (Deeg et al., 2017). The resulting chitosan-nanowhisker material has improved elastic modulus and tensile strength (Araki et al., 2012). Then, to crosslink the fungal material with a strength-bearing element/backing (such as cellulosic textiles), commercially available genipin powder is dissolved in acetic acid (Chase et al., 2019). The naturally derived crosslinking agent genipin is particularly promising as it is relatively efficient and has been extensively studied in relation to its crosslinking properties with chitosan (Mi et al., 2000). However, a side effect of using genipin is that it can cause a blue discoloration of materials (Deeg et al., 2017). The functional groups of genipin responsible for its crosslinking capabilities are the ester group and the third carbon in the six-membered dihydropyran ring (Ross et al., 2020b; Mi et al., 2000; Butler et al., 2003). Both of these functional groups react with the primary amine group in chitosan, forming connections between two chitosan chains (Mi et al., 2000; Butler et al., 2003). The resulting genipin mixture is then mixed with an undisclosed solution that has a pH between 2 and 3 (Chase et al., 2019). This second mixture is applied to the fungal material at a specific rate (ranging from 0.05% to 4% of the weight of the genipin polymer) to create a genipin-fungal mixture. This mixture is then incubated at 25°C for 40 min to several hours with agitation. Finally, the fungal material is rinsed with water.
Coating processing techniques
The physical properties of pure flexible mycelium material are sometimes insufficient and must be improved for diverse types of applications. Various coating techniques borrowed from the traditional leather and textile manufacturing industry can enhance the functional properties of mycelium materials. Coating agents such as dyes, resins, oils, paraffins, and polymers of natural or synthetic origins can be applied to the mycelium material using methods like air spray, curtain coating, or dip coating.
One technique that creates a protective barrier and simultaneously improves the mechanical properties of mycelium-based materials is the lamination process, which involves applying a thin polymeric film. Poly(L-lactic) acid (PLA), a sustainable polyester produced through microbial fermentation, is one example of a polymer used in this process.
Mycoworks (Scullin et al., 2020) has developed a lamination method that applies heat and pressure to bind a PLA film onto the surface of the mycelium-based material. This increases the strength and durability while preserving the flexibility and (bio-) degradability (Scullin et al., 2020).
In addition to these traditional coating techniques, Mycoworks and Mogu utilize a biodegradable polymer eluted in a mixture of water and an organic solvent (Scullin et al., 2020;Gandia et al., 2021a). The organic solvents utilized in these bio-based polymers can include alcohol, ketones, ethers, alkanes, cyclic ethers, glycol ethers, and phenylated solvents. The polymer is dispersed in water at a concentration ranging from 0.1% to 50% (Scullin et al., 2020). Water carries the polymer into the fungal matrix to fortify the hyphae, and improve abrasion resistance, colorfastness to crocking, dye transfer and water resistance (Scullin et al., 2020). Vegetable oils (soybean, sunflower, corn) can also be used as natural bioresources for the production of bio-based polyurethanes (Gandia et al., 2021a). Alternative biomass resources, such as fungal polyols, chitin, and glucans can be employed as well. Coatings typically include crosslinking agents and various additives such as surfactants, antifoaming agents, anti-gelling agents, anti-oxidation agents, thickening agents, plasticizers, flame retardants, pigments, and fillers (Gandia et al., 2021a).
The coating application process can be performed in multiple steps using spray or roll-transfer methods (Scullin et al., 2020). An initial layer of a polyurethane- or acrylic-containing medium is applied to promote adhesion. Subsequently, additional layers containing color pigments, acrylics, silicones, resins, or polyurethanes are applied and dried. This process is followed by heating and pressing, which can be achieved using a heated roller. The final step is drying the material between 50°C and 150°C to remove moisture.
Material properties
Currently, there is a need for a standardized characterization study gathering all available mycelium-based leather-like materials from commercial players in the field. While a few independent third-party characterization results have been made publicly available, testing standards can still differ between countries (ASTM versus ISO) and among different compositions and finishes of leather-like products. Relevant material properties of leather, besides thickness (ASTM D1813, ISO 2589) and apparent density (ASTM D2346, ISO 2420), are tensile strength and percentage elongation (ASTM D2209 and D2211, ISO 3376), tear strength (tongue tear) (ASTM D4704, ISO 3377), abrasion resistance (ASTM D7255, ISO 17076) and colorfastness when exposed to diverse conditions (wash, seawater, alkali, acid, wet-/dry crocking, perspiration, etc.). The testing and standardization process for these new materials is further complicated by the various types of ingredients and additional materials used in combination with mycelium biomass to achieve a finished leather-like product. Consequently, different standard tests are needed to accurately characterize the materials, considering that mycelium is almost always combined with different elements to fulfil its functional role as a leather-like material.
The complexity of characterization, arising from the diverse array of material compositions, offers the advantage of tailoring the material properties to specific applications. Mycelium-based materials serve as a basic platform that allows for tuning and engineering a multitude of characteristics and properties depending on the users' requirements. Customizable tensile strength, density, and fiber orientation are considered key selling points of these materials. Another advantage over traditional leather is the scrap-less confection process enabled by the material's homogeneity and the ability to customize order size. Currently, singular pieces of up to 60 m × 4 m are achievable (Ecovative, 2023).
For publicly reported values of different properties of commercial leathery mycelium materials, we refer to the work of Jones et al. (2020a) and Vandelook et al. (2021).
Perspectives and conclusion
This patent overview summarizes the most recent leather-like mycelium material patent publications and indicates that this field is currently experiencing rapid expansion. There has been a significant rise in patents published since the end of 2019. Almost 80% of the granted patents originated from US applications. The examination of patents provides an overview of the processing methods and fermentation techniques that are being focused on for valorization purposes. Three main methods for growing fungal biomass were derived from the patents: solid-state surface fermentation, liquid-state surface fermentation, and stirred submerged liquid fermentation. In short, filamentous fungi can be grown on the surface of a solid growth medium or on the surface of a liquid medium. They can also be grown fully submerged in a liquid medium using a bioreactor or shaking flask setup. Additionally, different patents describe how hyphal cell walls can be chemically crosslinked, which often increases the material's strength and stiffness while preserving its extendibility. Finally, a variety of coating products can be used to cover the mycelium material, including spraying or laminating thin (water-based, bio-based) polymeric films.
In order to maintain a competitive edge in the early stages of commercialization, it became apparent throughout the patent search that applicants frequently employ vague and generic descriptions of their innovations. One notable limitation of the patents reviewed in this study is the absence of quantitative data and references to internationally recognized standards such as ISO (International Organization for Standardization). So far, the field of mycelium materials is still young, and it is clear from reviewing the patent literature that private companies have been investing more time and effort into advancing research and stimulating innovation. Unfortunately, this means that much of the produced data is restricted or of limited access, which can hamper ongoing research. The lack of specific numerical values and standardized testing procedures makes it challenging to directly compare and evaluate the results reported in different patents. The absence of quantitative data hinders a comprehensive understanding of the performance and characteristics of the described mycelium materials and their applications. There is a need for more publicly funded research at different levels, from the production of mycelium materials to the consumer experience, and to identify any negative or harmful elements or processes that could impact the environment or human health.
For instance, according to a recent study funded by MycoWorks, the growth process of Ecovative, which involves pumping large amounts of CO2 into the aerial mycelium growth chamber, is likely to have a very high carbon footprint because fuels are burned at the source of CO2 production and that CO2 is then released into the atmosphere as the mycelium grows (Williams et al., 2022). Further research is required on the effects of mycelium material production on climate change and carbon footprint to enhance production methods and promote technical choices that will benefit large-scale facilities. Furthermore, most mycelium products now available on the market feature some sort of PU coating that ranges in thickness from 10 to 500 μm. These synthetic polymers are added to protect and/or reinforce the mycelium material, but they hinder its biodegradability in a natural environment. Continued advancement in sustainable coatings will be necessary to overcome the drawbacks of employing non-sustainable synthetic coatings.
In conclusion, based on the recent increase in patent applications, it is reasonable to expect substantial breakthroughs and a further increase in the number of patents on the topic of leather-like mycelium materials in the following decade.
Numerical simulation for homogeneous–heterogeneous reactions and Newtonian heating in the silver-water nanofluid flow past a nonlinear stretched cylinder
The present exploration aims to deliberate silver-water nanofluid flow with homogeneous-heterogeneous reactions and magnetic field impacts past a nonlinear stretched cylinder. The novelty of the presented work is enhanced with the addition of Newtonian heating, heat generation/absorption, viscous dissipation, nonlinear thermal radiation and Joule heating effects. The numerical solution is established via the shooting technique for the system of ordinary differential equations with high nonlinearity. The influences of miscellaneous parameters, including the nanoparticle volume fraction (0.0 ≤ ϕ ≤ 0.3), magnetic parameter (1.0 ≤ M ≤ 4.0), nonlinearity exponent (1.0 ≤ n ≤ 5.0), curvature parameter (0.0 ≤ γ ≤ 0.4), conjugate parameter (0.4 ≤ λ ≤ 0.7), heat generation/absorption parameter (0.2 ≤ Dc ≤ 0.8), radiation parameter (0.7 ≤ K* ≤ 1.0), Eckert number (0.1 ≤ Ec ≤ 0.7), strength of the homogeneous reaction (0.1 ≤ κ1 ≤ 1.8), strength of the heterogeneous reaction (0.1 ≤ κ2 ≤ 1.8) and Schmidt number (3.0 ≤ Sc ≤ 4.5), on the axial velocity, temperature profile, local Nusselt number, and skin friction coefficient are discussed via graphical illustrations and tabulated numerical values. It is examined that the velocity field diminishes while the temperature profile enhances for mounting values of the magnetic parameter. An excellent concurrence is achieved when our obtained numerical calculations are compared with an already published paper in the limiting case; hence dependable results are being presented.
Introduction
The feeble thermal conductivity of certain base fluids in numerous processes has been a big obstacle to shaping a refined product. Certain techniques, such as dispersing larger solid particles in the base fluid, were proposed by researchers to overcome this deficiency, but the outcomes were not very encouraging owing to problems such as pressure loss, abrasion and clogging. Nevertheless, the novel concept of the nanofluid [1] (an amalgamation of suspended nanometer-sized metallic particles and some ordinary fluid such as oil, water or ethylene glycol) has revolutionized the modern industrial world. These nano-sized (<100 nm) particles are comprised of metals, their oxides, and carbon nanotubes. Nanofluids possess certain unique properties that make them potentially worthwhile in numerous engineering and industrial heat transfer applications.

The flows of numerous fluids in the presence of magnetohydrodynamics (MHD) have extensive and important applications in aerospace engineering, MHD generators, medicine, the geothermal field, petroleum processes, nuclear reactor engineering and astrophysics. A reasonable number of explorations have been conducted on MHD fluid flows, including an effort by Ramzan et al [21], who inspected the MHD flow of a Jeffrey nanofluid with radiation effects. Hayat et al [22] studied the MHD micropolar fluid flow with homogeneous-heterogeneous (h-h) reactions over a curved surface which is stretched in a linear manner. The study of an MHD water-based nanofluid thin film past a stretched cylinder using the homotopy analysis method was considered by Khan et al [23]. Ramzan and Bilal [24] deliberated the 3D nanofluid flow in the presence of MHD and chemical reaction. Ishak et al [25] utilized the stretching cylinder to examine the MHD flow. Qayyum et al [26] inspected the MHD stagnation point nanoliquid flow with Newtonian heat and mass conditions. Haq et al [27] scrutinized the MHD nanofluid flow with thermal radiation via a stretching sheet near a stagnation point. Bhatti and Rashidi [28] studied the Hall effect on an MHD peristaltic flow. Nadeem and Hussain [29] examined the MHD Williamson flow of a nanoliquid past a heated surface. Ramzan et al [30] numerically studied the MHD micropolar nanofluid past a rotating disk. Ibrahim [31] used a linearly stretched surface to discuss the MHD nanofluid flow in the presence of melting heat near a stagnation point.
A direct proportionality between the heat transfer rate and the local temperature is called Newtonian heating. It is also named conjugate convective flow. It is utilized in many processes, such as the design of heat exchangers, conjugate heat transfer around fins, and convective flows in which heat is absorbed from solar radiators by surrounding bounded surfaces. Merkin [32] was the first to consider four distinct categories of heat transfer from the wall to the ambient fluid, namely (a) Newtonian heating, (b) conjugate boundary conditions, (c) constant or prescribed surface heat flux, and (d) constant or prescribed surface temperature. Lately, various researchers have used the impact of Newtonian heating because of its broad practical applications [33][34][35][36][37][38][39][40].
A literature survey indicates that abundant research articles are available pertaining to nanofluid flows with combined impacts of h-h reactions and MHD past linear/nonlinear stretching surfaces. Comparatively less research work has been done with nanoliquids past cylinders, and this choice gets even narrower if we talk about nanoliquid flows over nonlinear stretching cylinders. As far as our knowledge is concerned, no study so far has been conducted for the nanoliquid flow (with silver nanoparticles and water) past a nonlinear stretched cylinder with the combined impacts of h-h reactions, Newtonian heating, and nonlinear thermal radiation. Thus, our prime objective is to examine the nanoliquid flow past a nonlinear stretching cylinder with Newtonian heating, nonlinear thermal radiation, and h-h reactions. This exploration is unique in its own way and will attract a good readership. The numerical solution of the system of equations is acquired with the Runge-Kutta method by the shooting technique. A comparative study with an already established result is also made and an excellent concurrence of both results is obtained.
Flow analysis
Consider an incompressible Ag-water nanoliquid flow past a nonlinear stretching cylinder with h-h reactions. In addition, nonlinear thermal radiation and Newtonian heating effects are also considered. It is presumed that a magnetic field $B(x) = B_0 x^{(n-1)/2}$ is applied along the radial direction. The induced magnetic field is overlooked owing to the supposition of a small magnetic Reynolds number (figure 1).
The homogeneous reaction for cubic autocatalysis can be expressed as given below. These reaction equations guarantee that the reaction rate vanishes in the outer tier of the boundary layer.
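The reaction equations are not reproduced in the extracted text; for reference, the cubic autocatalysis scheme conventionally used in this class of homogeneous-heterogeneous models (following Chaudhary and Merkin) is sketched below and is presumably the form intended, with a and b the concentrations of the chemical species A and B, and k_c and k_s the homogeneous and heterogeneous rate constants.

```latex
\[
A + 2B \rightarrow 3B, \qquad \text{rate} = k_c\, a\, b^{2} \quad \text{(homogeneous reaction in the bulk fluid)},
\]
\[
A \rightarrow B, \qquad \text{rate} = k_s\, a \quad \text{(heterogeneous reaction on the catalyst surface)}.
\]
```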
Under the boundary layer approximation, the continuity, momentum, temperature and concentration equations are formulated together with the allied boundary conditions; figure 1 illustrates the flow geometry. The numerical values of the specific heat, density, and thermal conductivity of the nanoparticle (Ag) and the conventional base fluid (water) are given in table 1.
The measured forms for the thermo-physical properties of the nanofluid are given below.
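The explicit expressions do not survive in the extracted text; the relations below are the forms most commonly adopted in Tiwari-Das-type nanofluid models (Brinkman viscosity, mixture rules, and the Maxwell-Garnett conductivity model) and are offered here as an assumed reference, with ϕ the nanoparticle volume fraction and the subscripts f, s and nf denoting the base fluid, the solid nanoparticle and the nanofluid, respectively.

```latex
\[
\mu_{nf} = \frac{\mu_f}{(1-\phi)^{2.5}}, \qquad
\rho_{nf} = (1-\phi)\,\rho_f + \phi\,\rho_s, \qquad
(\rho c_p)_{nf} = (1-\phi)\,(\rho c_p)_f + \phi\,(\rho c_p)_s,
\]
\[
\frac{k_{nf}}{k_f} = \frac{k_s + 2k_f - 2\phi\,(k_f - k_s)}{k_s + 2k_f + \phi\,(k_f - k_s)}.
\]
```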
Here, it is anticipated that the chemical species A1 and B1 are analogous. This assumption implies that the diffusion coefficients DA and DB are equivalent, i.e., δ = 1, and because of this assumption the two concentration fields can be combined. Using equation (16), equations (13) and (14) with the corresponding boundary conditions take a reduced form.
Local Nusselt number and Skin friction factor
The dimensional forms of the skin friction factor (Cf) and the local Nusselt number (Nux) are defined in terms of the wall shear stress τw and the wall heat flux qw. Substituting equations (10) and (20) into equation (19) yields the corresponding dimensionless expressions.
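The expressions themselves are garbled in the extracted text; the definitions below are the ones typically used for flow past a stretching cylinder and are offered only as an assumed reference. The normalization of Cf (with or without a factor of 1/2) varies between papers; here u_w is the stretching velocity, R the cylinder radius and q_r the radiative heat flux.

```latex
\[
C_f = \frac{\tau_w}{\rho_f\, u_w^{2}}, \qquad
Nu_x = \frac{x\, q_w}{k_f\,(T_w - T_\infty)},
\]
\[
\tau_w = \mu_{nf}\left.\frac{\partial u}{\partial r}\right|_{r=R}, \qquad
q_w = -\,k_{nf}\left.\frac{\partial T}{\partial r}\right|_{r=R} + (q_r)_w .
\]
```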
Numerical technique
The shooting technique is employed to find the numerical solution of equations (11), (12) and (17) with the associated boundary conditions (15) and (18). While finding the numerical solution, the third- and second-order differential equations are converted to first order by introducing new variables. In the shooting technique, we select an initial guess that satisfies the boundary conditions and the equations asymptotically (an illustrative sketch of the shooting procedure is given below, after the discussion of the velocity profiles). For the present problem, the tolerance is taken as 10^-7. A comparison of the present analysis with the already published paper of Qasim et al [41] in the limiting case is given in table 2, and all numerical calculations depict a good agreement.

For mounting values of the magnetic parameter M, a reduction in the velocity of the fluid is noticed. The axial velocity also diminishes with growing values of the nonlinearity exponent n; this effect is depicted in figure 4. This is due to the fact that fluid particles are disturbed for larger values of n. Actually, more collision amongst fluid particles is witnessed, which obstructs the movement of the fluid, and ultimately a reduction in axial velocity is noticed. The impression of the curvature parameter γ on the axial velocity is depicted in figure 5. It is seen that the axial velocity is a growing function of γ. In fact, increased values of γ result in a squeezed radius and ultimately less contact area between the fluid and the cylinder is detected. This is the main reason behind the augmented axial velocity.
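As flagged above, the following is a minimal sketch of the shooting procedure, illustrated on the classical Blasius flat-plate equation rather than on the paper's coupled system; the equation, the boundary conditions and the SciPy-based root search are our own illustrative choices, but the logic (guess the missing initial slope, integrate with a Runge-Kutta solver, and correct the guess until the far-field condition is met) is the same one applied to equations (11), (12) and (17), only with several unknown initial slopes corrected simultaneously.

```python
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

# Illustrative shooting method for the Blasius equation f''' + f f'' = 0
# with f(0) = 0, f'(0) = 0 and f'(eta -> infinity) = 1.

ETA_INF = 10.0  # truncated "infinity" of the similarity domain

def rhs(eta, y):
    f, fp, fpp = y
    return [fp, fpp, -f * fpp]

def residual(s):
    """Mismatch in the far-field condition f'(ETA_INF) = 1 for a guessed f''(0) = s."""
    sol = solve_ivp(rhs, (0.0, ETA_INF), [0.0, 0.0, s], rtol=1e-9, atol=1e-9)
    return sol.y[1, -1] - 1.0

# Runge-Kutta integration (inside solve_ivp) plus a root finder acting on the guess.
s_star = brentq(residual, 0.1, 1.0, xtol=1e-7)
print(f"shooting estimate of f''(0): {s_star:.5f}")  # approx. 0.4696 for this form of Blasius
```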
Temperature profile for several parameters
Figures 6-11 analyze the effect of the curvature parameter, radiation parameter, conjugate parameter, and magnetic parameter on the temperature field. Figure 6 exhibits the effects of the curvature parameter γ on the temperature profile. The fluid's temperature enhances for augmenting values of γ. Actually, an increase in heat transport is detected for augmented values of γ, thus a rise in the temperature profile is witnessed. The impact of the conjugate parameter λ on the temperature field is characterized in figure 7. It is determined that the temperature profile upsurges with mounting values of λ. Higher values of λ lead to a stronger heat transfer coefficient and, as a result, more heat is transferred from the cylinder to the fluid. It is pertinent to mention that λ → ∞ corresponds to constant wall temperature and λ = 0 indicates an insulated wall. Figure 8 is drawn to analyze the effect of the radiation parameter K* on the temperature profile. The temperature field enhances for augmented estimations of K*. In fact, for growing values of K*, the mean absorption coefficient decreases, thus growth in the radiative heat transfer rate is perceived. Figure 9 is portrayed to examine the impact of the magnetic parameter M on the temperature field. It is seen that for growing values of the magnetic parameter M, the temperature profile enhances. This is because of the verity that the Lorentz force augments owing to increased estimates of M, thus impeding the fluid's movement. In this way, more collision between molecules of the fluid is observed and additional heat is produced, thus increasing the temperature of the fluid. The impacts of the heat generation/absorption parameter Dc and Eckert number Ec are illustrated in figures 10 and 11, respectively. The temperature of the fluid escalates for growing estimates of Dc and Ec, which is an obvious veracity.
Concentration profile for different parameter
Figures 12 and 13 are illustrated to portray the strength of homogeneous and heterogeneous reactions' impact on concentration profile respectively. It is observed that the concentration profile intensifies in both cases for growing values of h, and after a certain estimate of h, no impact on concentration distribution is seen for both cases of the strength of homogeneous and heterogeneous reactions. for various estimates of parameters. It is detected that the numerical value of the Nusselt number and skin friction coefficient are enhanced for growing values of the nonlinearity parameter n, curvature parameter g, and solid volume friction f. While for the M (magnetic parameter) the skin friction coefficient enhances and the local Nusselt number diminishes. Further, for the value of temperature ratio parameter N r and radiation parameter K * , the Nusselt number diminishes, while the skin friction coefficient is constant for temperature ratio parameter and slight change is observed for radiation parameter.
Concluding remarks
The problem of nanoliquid flow with silver nanoparticles and water (base fluid) is discussed with nonlinear thermal radiation and Newtonian heating past a nonlinear stretching cylinder. The effects of heterogeneous/homogeneous reactions with MHD are also examined. The shooting technique is engaged to solve the nonlinear ODEs. The key points of the current effort are appended as follows:
• For growing values of the solid volume fraction of nanoparticles, escalation in the velocity field is observed.
• The velocity profile diminishes, and the temperature profile enhances, for augmented values of the magnetic parameter.
• For mounting values of the radiation and curvature parameters, the temperature field enhances.
• The concentration field decreases versus increasing values of the strength of the heterogeneous and homogeneous reactions.
Green Building Adoption on Office Markets in Europe: An Empirical Investigation into LEED Certification
The goal of the paper is to evaluate the impact of selected factors on the adoption of LEED (Leadership in Energy and Environmental Design) green building certification in Europe. In the empirical part of the paper we track the fraction of LEED-registered office space in selected European cities, and assess the impact of selected socioeconomic and environmental factors on the certification adoption rate. This research contributes to the ongoing debate about the adoption of green buildings in commercial property markets. In this paper, we investigate factors affecting the adoption of LEED certification using the Arellano and Bond generalized method-of-moments estimator. Compared to prior studies, which relied on cross-sectional data, our research uses a panel approach to investigate the changes in green building adoption rates in selected European cities. Among the cities that are quickly adopting LEED are Frankfurt, Warsaw, Stockholm, and Dublin. The adoption process was not equally fast in Brussels and Copenhagen. Using the dynamic panel model approach, we found that the adoption of green building certification is linked to overall innovativeness in the economy and the perceived greenness of the city. Contrary to some previous studies we did not observe links between the size of the office market and the LEED adoption rate.
Introduction
Green buildings (also known as sustainable buildings, energy-efficient buildings, eco-buildings, or passive buildings) are the industry's answer to the requirement of sustainable development [1], which is one of the most important challenges of the contemporary economy [2]. The growing interest in green building issues is visible in several basic dimensions. First, attention should be paid to the development of dedicated research in this field, which is being undertaken by scientists from different parts of the world [3] representing various scientific disciplines, including economics, psychology, engineering, and management [4][5][6][7]. It is worth noting that these studies are conducted in the context of very different types of real estate, including residential [8] and commercial [9] as well as others [10]. Secondly, it is necessary to mention the creation and development of green building associations, supporting the creation and adaptation of multi-criteria assessment systems for the built environment in the context of compliance with the principles of sustainable development. In this context, an important role is played by green building rating systems, which are tools for evaluating buildings based on several objective criteria and clearly defined technical parameters. Among the most popular green certification systems are Leadership in Energy and Environmental Design (LEED) (launched in 1998) and Building Research Establishment Environmental Assessment Methodology (BREEAM) (launched in 1990), which are quite similar, analyze similar categories, and have a comparable cost. The differences between them are primarily formal. However, it is worth mentioning that the number of certificates is growing and the number of buildings certified worldwide has increased exponentially from just a few at the end of the 20th century to many thousands today [11]. This second aspect, related to the worldwide spread of green buildings, was the starting point of our empirical study.
Our research was conducted in the area of commercial real estate, which includes office properties. In this case, green buildings provide many benefits for various stakeholders (for example, investors, tenants, employees, and other users of buildings), not only direct financial but also economic, marketing, and social [9]. Despite much research in this area, in our opinion, there is a need for more detailed studies. A research gap exists especially in the field of empirical research and, compared to prior studies, which relied on cross-sectional data, our research used a panel approach. In the paper we analyze the spatial diffusion of sustainable innovation across office markets in Europe and in doing so, we fit into the academic discussion regarding the extent, rate, and consequence of absorption of the sustainability paradigm in the real estate business and construction market [12]. The goal of the paper is to evaluate the impact of selected factors on the adoption of LEED certification in Europe. This research contributes to the ongoing debate about the adoption of green buildings in commercial property markets, based mainly on the U.S. [13][14][15][16][17][18].
The rest of the paper is organized as follows: The Background and Literature Review section offers a brief insight into the theoretical foundation of green building diffusion and provides an overview of research findings to date. The Material and Methods section discusses the indicator used to investigate the adoption of green technologies on office markets, as well as measures used to evaluate the adoption rate. The Results section discusses the dynamics of the LEED adoption rate and regression estimation results that allowed us to evaluate the role of selected factors affecting technology diffusion. We discuss the findings in light of the prior research and outline directions for future research in the last section, entitled Discussion and Conclusions.
Background and Literature Review
The framework for the study of the diffusion and adaptation processes of green building confirmed by eco-certificates lies in the well-established Diffusion of Innovation Theory, historically pioneered by Tarde's early work (Les lois de l'imitation, 1890) [19,20], then intensively developed in the 1940s-1960s as a sociological study of the diffusion of agricultural innovations in the U.S. [21], and finally established and popularized by Rogers' seminal work [22,23]. In Rogers' view, "Diffusion of innovation is the process by which an innovation is communicated through certain channels over time among the members of a social system" [24] (p. 5). In this sense, diffusion is a social process based on communication in which knowledge about an innovation and a subjective evaluation of its benefits spread through a community from earlier to later adopters. The diffusion of innovations takes place over time. Time is essential for the flow of decision-making and the spread of knowledge as its basis [24] (p. 20). The time dimension is a delimiter of adopter classes, distinguished by their degree of innovativeness: innovators, early adopters, early majority, late majority, and laggards [25].
Along with the temporal dimension, geographical location and distance have also played a significant part in the diffusion of innovations. Diffusion is a spatio-temporal process whereby the characteristics of a place change as a result of previous events that occurred elsewhere. It, therefore, involves the spread of a particular phenomenon, in space and time, from limited origins [26] (p. 9). The groundwork for research on the spatial diffusion of innovation was laid by Swedish geographer Hägerstrand in his groundbreaking work [27], published in English in 1967 [28]. Hägerstrand saw diffusion as a geographic process resulting from interpersonal contact and information flow, influenced by time, the proximity of people (neighborhood effect), the ability to move innovation and information between areas, and the presence of physical and social barriers [29]. Hägerstrand's work sparked discussion of spatial diffusion and the development of research papers and analytical tools in this area, for instance [30][31][32][33][34][35]. Contemporary empirical research on the diffusion and adaptation of certified buildings across real estate markets, exploring the drivers of market penetration, draws not only on those general original theories but also on their subsequent extensions and adjustments to the type of innovations and industries. Koebel et al. [14] (p. 176) propose a general model for green building technology adaptation that includes seven multi-dimensional arrays drawn from diffusion and adaptation theory and previous research in this area. They address categories such as industry characteristics, market area characteristics, product characteristics, time, public policy, climate, and firm characteristics. Each contains a set of characteristics that can potentially be measured and incorporated into analytical models that examine green building diffusion and adaptation. In summarizing the research to date, Choi [36] highlights four general groups of factors that influence decisions in the sustainable building market. These are demand-side, supply-side, environmental condition, and public policy factors. Empirical research on the diffusion of certified office facilities overwhelmingly concerns the U.S. market and focuses on drivers of office market penetration and the spatial distribution of buildings at the level of major cities [36], core-based statistical areas (CBSA) [37], or metropolitan areas (MAs) [38].
Kok et al. [38] examined the spread of buildings certified for energy efficiency and sustainability (Energy Star and LEED) across 48 U.S. metropolitan areas over 15 years (1995-2010). First, they find a relationship between the adoption of energy-efficient technology and building size, which is consistent with the general observation on technology diffusion that larger firms are more responsive to technological innovation. They also discovered that the diffusion curve for Energy Star certificates matches the well-known S-shaped diffusion pattern of innovation. The purpose of Kok et al.'s study was to examine the impact of climatic, socioeconomic, real estate market, and policy variables on the dynamics of certified office space spread over time and space. They found that the adoption of green building innovations was faster in areas with higher pay and stronger income growth. This is consistent with previous studies, including, but not limited to, Cidell's research [39]. The second major factor affecting the diffusion of green buildings is the real estate market. Kok et al. identified that the size of the real estate market is important for diffusion processes: in markets with a higher supply of office space per employee, the adoption of certified buildings is quicker. In turn, higher vacancy rates negatively affect the growth of eco-labelled space. The third type of factor driving the growth of both Energy Star- and LEED-certified space has proven to be the price of commercial electricity.
Another influential paper examining the impact of climatic, socioeconomic, real estate market, and policy factors on the adoption of LEED-certified commercial buildings across 174 CBSA in the U.S. was that of Fuerst et al. [37]. They also found a significant positive impact of real estate market size on market penetration of LEED-certified buildings. Similar to the previously cited studies, areas with more affluent [38,39] and better-educated residents [39] have a higher proportion of LEED-certified buildings. When it comes to political drivers to stimulate green technology adoption, only mandatory requirements seem to matter. This conclusion follows the line of an earlier study by Choi [36] that focused on the impact of municipal policy tools on green building designation in central U.S. cities. Choi [36] also discovered that financial incentives do not affect green office building developments, however, he found a positive influence of regulatory policy and administrative incentives.
In addition to policy tools, a key factor for innovation diffusion processes is the social system. This follows from both Rogers' innovation diffusion theory and the concept of spatial diffusion. As Braun et al. [40] point out, social attitudes toward environmental problems and green solutions are manifested in consumers' willingness to pay for green products, which influences actors' supply-side decisions and motivates the implementation of socially responsible practices. Besides, social trends also influence politicians and their tools for sustainable development and adaptation of green technologies. To address the societal influence on green building diffusion, Braun et al. [40] introduced the Green Sentiment Index, which reflects the public's environmental awareness in various areas of the U.S., into the analysis. They found a significant positive social impact on the adoption of LEED-certified properties in public, commercial, and office buildings.
All of the cited works on the diffusion and adaptation of green office buildings are intra-urban analyses within the United States. Although green development is also of great importance to Europe and there is a growing body of research in this area, so far there are just a few cross-country studies investigating the diffusion of green technologies in housing markets [41,42]. Research on the penetration of green office buildings into European markets has so far been neglected. Therefore, we are convinced that our study, at least to some extent, narrows this gap by providing insights into the factors determining the varying degree of green building adoption in major European cities.
Materials and Methods
As a proxy for the adoption of green innovation on the office market in Europe we used the data on LEED-registered office projects. LEED is a multicriteria building assessment system established in 1998 by the United States Green Building Council (USGBC). It is widely considered as the global leader in green building assessment ( Figure 1).
Data reveals that it has a significant competitive position in Europe, where it ranks second amongst the various certification systems (Figures 2 and 3). The most popular green building labelling system in Europe is the Building Research Establishment Environmental Assessment Methodology (BREEAM), created in 1990 in the UK. Other important European green building certification systems are Deutsche Gesellschaft für Nachhaltiges Bauen (DGNB), created in 2007 by Deutsche Gesellschaft für Nachhaltiges Bauen e.V., and Haute Qualité Environnementale (HQE), created in 1992 by Association pour la Haute Qualité Environnementale (ASSOHQE). Both of these have gained substantial popularity outside their domestic markets-Germany and France, respectively. In the paper, we analyzed the adoption of LEED green building certification for one basic reason. Unlike other major certification schemes present in Europe (i.e., BREEAM, HQE, and DGNB), LEED was created not in one of the European countries but in the U.S. We believe that using LEED in the empirical part of the paper provides a good illustration of the adoption of green innovation on new commercial property markets outside the country of origin.
We modelled green innovation diffusion, investigating the adoption rate across office markets in European cities using the fraction of LEED-registered office space as the dependent variable. We applied a simple measure of the adoption of green technologies in the built environment, similar to Kok et al. [38]: the share of green buildings' area in the total building area in question. The dependent variable F_i (the fraction of LEED-registered office space in city i) is given by the following equation (Equation (1)):

F_i = z_i / x_i,  (1)

where z_i is the area of office space registered for LEED (in m2), and x_i is the total office stock (in m2). Aside from the fraction of LEED-registered (or certified) office space, other technology adoption measures have been applied in the literature [37,38]. Notably, Fuerst et al. [37] argue that a fraction indicator may lead to biased adoption estimates and opt for a variant of the spatial Gini coefficient. The formula is based on the proportion of LEED space in a given area normalized by the overall sustainable space. The G index [37] can be calculated according to the following formula (Equation (2)):

G_i = (z_i / Z) / (x_i / X),  (2)

where z_i and x_i are as in Equation (1), Z is the sum of LEED-certified office space in all cities, and X is the total office stock in all cities. Nonetheless, the G index indicator is not feasible in our research, as we focus on selected cities located in different countries.
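As a quick illustration of Equation (1), the snippet below computes the fraction F for a toy city-year panel; the column names and the numbers are purely hypothetical and are not taken from the data set used in the study.

```python
import pandas as pd

# Hypothetical city-year panel of LEED-registered office space and total stock (both in m2).
panel = pd.DataFrame({
    "city":     ["Warsaw", "Warsaw", "Dublin", "Dublin"],
    "year":     [2017, 2018, 2017, 2018],
    "leed_m2":  [900_000, 1_050_000, 350_000, 420_000],
    "stock_m2": [5_200_000, 5_400_000, 3_600_000, 3_700_000],
})

# Equation (1): F_i = z_i / x_i, computed for every city-year observation.
panel["F"] = panel["leed_m2"] / panel["stock_m2"]
print(panel)
```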
In the research, we combined the data on LEED office buildings with the information on office stock to calculate the fraction of LEED-registered office space in a given year. We monitored the changes in the fraction of LEED space in 14 cities in Europe from 2008 to 2018 (11 years). To understand the green building diffusion process, using this balanced panel setting we evaluated the influence of selected factors on the changes in the adoption rate (Figure 4). The data on LEED-registered projects came from the LEED Projects Directory administered by USGBC (https://www.usgbc.org/projects, accessed on 20 December 2020). The data on office stock (sto) and vacancy rate (vr) were collected from the Cushman & Wakefield market reports. Additionally, we used data on the U.S. direct investment position abroad (usfdi) as a proxy for the relative activity of U.S. companies in a given country. We hypothesized that a strong presence of U.S. companies would foster the adoption of the U.S.-originated green building certification. We also suspected that the adoption of green innovations may be faster in a greener and more innovative environment. To account for that, we used the fraction of citizens satisfied with green spaces (gre) and patent applications to the European Patent Office (pat) as proxies (Table 1). The analysis of the adoption of green building technologies on major office markets in Europe is presented in the following section.
Table 1. Selected explanatory variables and data sources: usfdi - U.S. direct investment position abroad on a historical-cost basis (country level); source: the U.S. Bureau of Economic Analysis. gre - the fraction of citizens very satisfied with green spaces such as public parks or gardens in a given city; source: Eurostat. pat - patent applications to the European Patent Office (EPO) (country level); source: Eurostat.

Results

Simple exploratory analysis (see Figures 2 and 3) indicates that the LEED certification scheme has not been equally successful in European countries. We observed that there are significant differences in the usage of LEED green building labels between European countries, some of them influenced by the presence of domestic green building certification systems (BREEAM in the UK, DGNB in Germany, and HQE in France). Further analysis revealed divergent pathways of LEED adoption in selected European cities (see Figure 5). Among the cities quickly adopting LEED are Frankfurt, Warsaw, Stockholm, and Dublin, where the fraction of LEED-registered space increased significantly during the study period (2008-2018). The adoption was not as fast and smooth in Milan, Munich, Madrid, or Amsterdam, where the fraction of LEED space rose steadily, but less dynamically. Finally, the LEED adoption process was considerably slower in Brussels or Copenhagen (Figure 5).
Using a standard dynamic panel setting (14 office markets observed over 11 years) we investigated the impact of selected factors on the LEED building adoption rate. We evaluated how selected economic and environmental indicators affect the adoption of LEED certification using the Arellano and Bond [43] generalized method-of-moments (GMM) estimator. The results of the estimation are presented in Table 2. The dependent variable is the fraction of LEED-registered office space. The dynamic panel model used in the research allows us to account for dynamic adjustments in the adoption of green technologies within selected office markets by adding the lagged dependent variable as a regressor in the econometric model. The coefficient for the lagged dependent variable is positive (0.77) and statistically significant. We observed a positive (0.0015) and significant impact of the fraction of citizens satisfied with green spaces in a given city on the LEED adoption rate (measured as the fraction of LEED-registered office stock). This particular result suggests that the adoption of green building certification is positively linked with the perceived greenness of the city. One surprising result is that the U.S. direct investment position abroad (usfdi) did not influence the fraction of LEED-registered space. We hypothesized that significant U.S. investment, along with the presence of U.S. companies, could facilitate the diffusion of LEED certification, which is the domestic and default green building certificate in the United States. This was not the case.
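For readers unfamiliar with the estimator, the specification below is the generic dynamic panel setup implied by the description above (lagged dependent variable plus controls, city fixed effect removed by first-differencing, lagged levels used as GMM instruments); the exact set of controls x_it is the one listed in Table 2 of the source, and the notation here is our own.

```latex
\[
F_{it} = \rho\, F_{i,t-1} + x_{it}'\beta + \mu_i + \varepsilon_{it},
\]
\[
\Delta F_{it} = \rho\, \Delta F_{i,t-1} + \Delta x_{it}'\beta + \Delta\varepsilon_{it},
\qquad
\mathrm{E}\!\left[F_{i,t-s}\, \Delta\varepsilon_{it}\right] = 0 \;\; \text{for } s \ge 2 .
\]
```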
Contrary to prior research based on U.S. data we did not observe a positive impact of the size of the office market (measured as office stock) on the adoption of LEED certification. A positive coefficient would suggest that adoption is faster and stronger in major office markets (first-tier). It was not the case in our sample. The coefficient is not statistically significant. We did not observe a significant relationship between the vacancy rate and adoption rate.
We also observed a positive impact of the number of patent applications to the European Patent Office (EPO) in a given country on the fraction of LEED-registered office space in a city (0.0005556). This may provide limited support for the notion that the overall innovativeness of the economy translates into the adoption of green innovation in the real estate industry.
Discussion and Conclusions
Using empirical data on the LEED certification scheme in the European office market context we found that citizens' level of satisfaction with green spaces (i.e., public parks or gardens), used as a proxy for overall city greenness, was positively linked with the green building adoption indicator. The estimates suggest that the LEED adoption rate in selected European cities was also positively linked with the overall level of innovation in the economy (patent applications to the EPO). We did not observe an impact of U.S. direct investment on the adoption of the LEED certification scheme. Nonetheless, the links between the presence of U.S. companies in given cities and office market willingness to adopt U.S. green building certification schemes need to be explored in the future, preferably using city-level variables.
The set of explanatory variables differed significantly from prior U.S.-based studies by Fuerst et al. [37] and Kok [38]. The aforementioned studies explored the role of climate zone, ideology/political variables, and environmental policy incentives. Some of these variables either did not seem well-suited to the European context (Republican vs. Democratic) or were not feasible from a data-gathering perspective (obtaining comparable data from various European countries is far more complicated than in the U.S.). Compared to those studies, we explored the role of innovativeness and the role of U.S. FDI in a given country in the adoption of USGBC LEED certification. None of these issues had been investigated before. Contrary to some of the prior studies, which relied on cross-sectional data [37], our research used a panel approach to investigate the changes in green building adoption rates in selected European cities. There are several limitations and natural extensions of this research. One obvious limitation stems from the fact that we used the fraction of LEED-registered space as a proxy for green building diffusion. That particular approach results in two fundamental problems. Firstly, as discussed in Kok et al.'s [38] seminal paper, one of the weaknesses of the approach lies in the fact that certification is a voluntary procedure (some green buildings are not certified, based on the owners' decision). Secondly, the European context is far more challenging than the American one, due to fierce competition from domestic certification systems (BREEAM in the UK, DGNB in Germany and Austria, and HQE in France). In that respect, the adoption of LEED certification is hampered by strong competition. As a consequence, the fraction of LEED-certified buildings will not represent the true level of green building diffusion in a given office market. On the other hand, this particular finding has some implications for those interested in the promotion of LEED green building certification in Europe. Adoption of LEED certification seemed to be significantly faster in countries without domestic competitors. Therefore, focusing on those markets could result in strengthening the competitive position of LEED certification compared to its European counterparts.
The results of the empirical investigation described in this paper suggest several directions for further research. A natural extension of the study, albeit challenging from the data collection perspective, would be combining major certification schemes (LEED, DGNB, BREEAM, and HQE) when calculating the overall green space in given cities. Additionally, in the paper, we focused on major office markets in selected countries in Europe. A follow-up study should extend the sample size and include smaller regional markets, preferably using hierarchical country-level controls to account for differences in the institutional framework. The latter analytical approach would allow some econometric problems of our research to be mitigated. The results are based on a relatively small sample (only 14 markets, compared to 48 in Kok et al.'s U.S. study [37]), making the Arellano-Bond [43] estimator problematic (it performs best in panels with small T and large N). This problem will be mitigated provided data on smaller regional office markets are used in future studies. Moreover, using green office building data for smaller European cities would allow exploration of how green technologies are adopted in second-tier and third-tier office markets. Data Availability Statement: Publicly available datasets were analyzed in this study. This data can be found here: https://www.usgbc.org/projects (accessed on 1 April 2021); https://ec.europa.eu/eurostat/home (accessed on 1 April 2021); https://www.bea.gov/international/di1usdbal; https://www.cushmanwakefield.com/en (accessed on 1 April 2021).
|
v3-fos-license
|
2016-05-04T20:20:58.661Z
|
2012-05-14T00:00:00.000
|
14359739
|
{
"extfieldsofstudy": [
"Environmental Science",
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "http://downloads.hindawi.com/journals/bmri/2012/581039.pdf",
"pdf_hash": "a6c3195127728e4e2fbe6f4d78d1868a26f44d97",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46446",
"s2fieldsofstudy": [
"Engineering",
"Environmental Science"
],
"sha1": "1edbd9f6987bd673bbfaaf05fbd333b6a4e6a949",
"year": 2012
}
|
pes2o/s2orc
|
Biofuel Manufacturing from Woody Biomass: Effects of Sieve Size Used in Biomass Size Reduction
Size reduction is the first step for manufacturing biofuels from woody biomass. It is usually performed using milling machines, and the particle size is controlled by the size of the sieve installed on a milling machine. There are reported studies about the effects of sieve size on energy consumption in milling of woody biomass. These studies show that energy consumption increased dramatically as sieve size became smaller. However, in these studies, the sugar yield (proportional to biofuel yield) in hydrolysis of the milled woody biomass was not measured. The lack of comprehensive studies about the effects of sieve size on energy consumption in biomass milling and sugar yield in the hydrolysis process makes it difficult to decide which sieve size should be selected in order to minimize the energy consumption in size reduction and maximize the sugar yield in hydrolysis. The purpose of this paper is to fill this gap in the literature. In this paper, knife milling of poplar wood was conducted using sieves of three sizes (1, 2, and 4 mm). Results show that, as sieve size increased, energy consumption in knife milling decreased and sugar yield in hydrolysis increased in the tested range of particle sizes.
Introduction
The transportation sector of the United States accounts for over 70% of the nation's total petroleum consumption, and 57% of the petroleum is imported [1]. In addition, use of petroleum-based fuels contributes to accumulation of greenhouse gases (GHG) in the atmosphere. Due to concerns of energy security and GHG emissions, it becomes crucial to develop domestic sustainable alternatives to petroleum-based transportation fuels [2].
Biofuels produced from cellulosic biomass (herbaceous, woody, and generally inedible portions of plant matter) are a sustainable alternative to petroleum-based fuels. The United States has the resources to produce over 1 billion dry tons of biomass annually, more than 80% of which is cellulosic biomass, including about 320 million dry tons of woody biomass [5,6]. This amount of biomass is sufficient to produce 90 billion gallons of liquid fuels, which could replace about 30% of the nation's current annual consumption of petroleum-based transportation fuels [6]. In contrast to grain-based biofuels, cellulosic biofuels do not compete with food or feed production for the limited agricultural land [7]. Figure 1 shows the major processes of converting woody biomass to ethanol (the most common form of biofuel). Size reduction reduces the particle size of woody biomass. Pretreatment helps to make cellulose in the biomass more accessible to enzymes during hydrolysis. Hydrolysis depolymerizes cellulose into its component sugars (glucose). Afterwards, fermentation converts glucose into ethanol [3].
Size reduction of woody biomass is necessary because large-size woody biomass cannot be converted to biofuels efficiently with the current conversion technologies [8][9][10]. Size reduction of woody biomass usually involves two steps. The first step is wood chipping [11]. Machines available for wood chipping include disk, drum, and V-drum chippers [12][13][14]. Figure 2 illustrates a disk chipper. Straight knives are mounted on a flywheel that revolves at a speed ranging from 400 to 1000 revolutions per minute (rpm). A wood log is fed to the disk chipper. Wood chips produced by wood chipping usually have sizes ranging from 5 to 50 mm [4]. Energy consumption of this step is typically about 0.05 Wh/g [15].
The second step is biomass milling to further reduce the wood chips into small particles. This step is usually conducted on knife mills [16] or hammer mills [17][18][19]. Wood particles produced by biomass milling usually have sizes ranging from 0.1 to 10 mm [19]. Energy consumption of this step ranged from 0.15 to 0.85 Wh/g [15,20,21].
Sieves are installed on knife mills and hammer mills to control the size of wood particles. During biomass milling, wood particles that are smaller than the sieve size (the size of the openings on a sieve) will pass through the sieve; those larger than the sieve size will be recirculated and milled further. In this study, the terms sieve and sieve size are reserved for the sieves installed on knife mills or hammer mills.
There are reported studies about the effects of sieve size on energy consumption in woody biomass milling using knife mills or hammer mills. A consistent observation was that energy consumption increased dramatically as sieve size became smaller [22][23][24]. However, these reports did not present sugar yield (proportional to ethanol yield) results using the wood particles produced by biomass milling. It was reported that woody biomass with smaller particle size had higher sugar yield [25][26][27][28]. However, particle size in these reported studies was defined differently from the sieve size in this paper. In these studies, wood particles produced by knife mills or hammer mills using a certain sieve size were separated into several size ranges by the screening method. The term particle size was actually the particle size range determined by the sizes of the openings on the screens. In this paper, the size of the openings on the screen is called screen size. Moreover, previously reported studies did not present energy consumption data for the biomass milling process used to produce the wood particles from which the sugar yield measurements were performed.
The lack of comprehensive studies about the effects of sieve size on energy consumption in size reduction (biomass milling) and sugar yield in hydrolysis makes it difficult to decide which sieve size should be selected in order to minimize the energy consumption in size reduction and maximize the sugar yield in hydrolysis. The purpose of this study is to fill this gap in the literature by studying the effects of sieve size on energy consumption in size reduction and sugar yield in hydrolysis simultaneously.
Biomass Material Preparation.
Poplar wood chips were purchased from Petco Animal Supplies, Inc. (Manhattan, KS, USA). Since the purchased wood chips had a wide distribution in size, the wood chips were separated into three groups using two screens with screen size of 5 and 12.5 mm, respectively. Large chips are those that did not pass through the 12.5 mm screen. Small chips are those that passed through the 5 mm screen. Medium chips are those that passed through the 12.5 mm screen but not the 5 mm screen. Examples of large, medium, and small wood chips are shown in Figure 3. Only the medium wood chips were used in this study.
The moisture content of the wood chips (as purchased) was 1.2%, measured by following the ASAE Standard S358.2 [29]. To adjust the moisture content of wood chips to a desired level, distilled water was added (by spraying evenly) to the wood chips. To achieve wood chips of 10% and 18% moisture content, 96 and 233 mL distilled water was added per 1000 g of original wood chips, respectively. After moisture content adjustment, the wood chips were placed in the sealed Ziploc bags and stored in a refrigerator at 4 • C for at least 72 hours before knife milling.
Experimental Setup and Procedure for Knife Milling.
The experimental setup for knife milling of wood chips is illustrated in Figure 4. A Retsch knife mill (model no. SM 2000, Retsch GmbH, Haan, Germany) was used. It was equipped with a three-phase 1.5 kW electric motor. The rotation speed of the motor was 1720 rpm. Figure 5 shows the milling chamber of the knife mill. Three knives (95 mm long and 35 mm wide) were mounted on the rotor inside the milling chamber.
Four cutting bars were mounted on the inside wall of the milling chamber. Wood chips were cut into particles between the knives and the cutting bars. The gap between a knife and a cutting bar was 3 mm. A sieve (145 mm long and 98 mm wide) was mounted at the bottom of the milling chamber. Sieves with three sieve sizes (4, 2, and 1 mm, respectively), as shown in Figure 6, were used in this study.
Sieve sizes of 1 and 4 mm were selected because they were the minimum and maximum sieve sizes, respectively, that could be practically investigated in this study. As described in Section 2.1, the wood chips prior to milling had a size range of 5 to 12.5 mm. If any available sieve size larger than 4 mm was used, some of the wood chips would fall through the sieve without being cut. Furthermore, based on previous experience, if any available sieve size smaller than 1 mm (the next one was 0.5 mm) was used, some of the sieve openings would be blocked by milled particles, causing a significant increase in milling time and energy consumption.
At the beginning of each test, the knife mill was run for 10 seconds before loading any wood chips to avoid a current spike (this would happen if the knife mill started with wood chips already in the milling chamber). Then, 50 g of wood chips were loaded into the knife mill. This amount of wood chips was enough to keep the milling chamber approximately full (in volume). During knife milling, more wood chips were loaded into the milling chamber using a scoop, as shown in Figure 7. The amount of wood chips loaded by the scoop each time was 5 ± 1 g. These additional wood chips were loaded at a rate that would keep the milling chamber approximately full (in volume) but without causing overloading.
In each test, the total amount of wood chips loaded into the milling chamber was 200 g. The milling time was different under different conditions. When a smaller sieve size was used, it took a longer time to mill the same amount of wood chips. After each test, wood particles in the receiving container were collected, weighed, and kept in sealed Ziploc bags. The amount of wood particles collected by the receiving container in each test was less than 200 g, because some wood chips (or particles) had not yet passed through the sieve when the knife mill was turned off. Before starting the next test, the milling chamber was opened and any remaining wood chips were cleaned out using a brush. To allow the motor to cool down, there was a waiting period (at least five minutes) between two successive tests. Experimental conditions are listed in Table 1.
Energy Consumption.
In this study, energy consumption is the electricity consumed by the electric motor of the knife mill. As shown in Figure 4, electric current to the motor was measured using a Fluke 189 multimeter and a Fluke 200 AC current clamp (Fluke Corp., Everett, WA, USA). Current data were collected using Fluke View Forms software. The sampling rate was 2 readings per second. Data acquisition began after the first 50 g of wood chips was loaded into the milling chamber and stopped when the additional 150 g of wood chips was all loaded into the chamber. The knife mill was turned off right after data acquisition stopped. The software recorded the average current (I_AVE) in each test. The voltage (V_LN) was 208 V. The energy consumed during each test that lasted for t seconds (E_t), in Wh, was calculated using the following [31]:

E_t = (√3 × V_LN × I_AVE × t)/3600 (1)

Dividing E_t by the weight (w) of the wood particles collected from the receiving container after the test gives the energy consumption (E) per unit weight, as expressed in (2):

E = E_t/w (2)
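As a quick illustration of Eqs. (1) and (2), the sketch below recomputes the specific energy from hypothetical logged values; only the formulas and the 208 V supply voltage come from the description above, while the average current, test duration, and collected mass are made up:

```python
import math

def milling_energy_wh(i_ave_amps: float, duration_s: float, v_line: float = 208.0) -> float:
    """Energy drawn by the three-phase motor during one test, in watt-hours (Eq. 1)."""
    return math.sqrt(3) * v_line * i_ave_amps * duration_s / 3600.0

def specific_energy_wh_per_g(energy_wh: float, collected_mass_g: float) -> float:
    """Energy consumption per unit mass of collected wood particles, Wh/g (Eq. 2)."""
    return energy_wh / collected_mass_g

# Illustrative values: 2 A average current over a 500 s test, 190 g of particles collected.
e_t = milling_energy_wh(i_ave_amps=2.0, duration_s=500.0)
print(round(specific_energy_wh_per_g(e_t, collected_mass_g=190.0), 2), "Wh/g")
```

With these made-up inputs the result falls within the Wh/g range reported later for knife milling, which is only meant to show that the unit conversions are consistent.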
Sugar Yield.
Sugar yield in hydrolysis is the amount of glucose produced from hydrolyzing cellulose using enzymes. It was expressed as the concentration of glucose (mg/mL) in the measurement sample. Figure 8 shows the four steps in sugar yield measurement: pretreatment, biomass washing, hydrolysis, and sugar analysis. In this study, 10 g of biomass and 200 mL of 2% sulfuric acid were loaded in the 600 mL vessel of a Parr pressure reactor (Model 4760A, Parr Instrument Co., Moline, IL, USA). Pretreatment time was 30 minutes, and pretreatment temperature was 140 °C. After pretreatment, biomass was washed with hot distilled water using a centrifuge (Model PR-7000 M, International Equipment Co., Needham, MA, USA). The purpose of biomass washing was to remove the acid residues and inhibitors (substances that would bind to enzymes and decrease their activity to depolymerize cellulose to glucose [32]) formed during pretreatment. The rotation speed of the centrifuge was 4500 rpm. Each biomass sample was washed three times, and each wash lasted for 15 minutes.
Accellerase 1500 (Danisco US Inc., Rochester, NY, USA) enzyme complex was used for hydrolysis of wood particles into sugars in solution with sodium acetate buffer (50 mM, pH 4.8) and 0.02% (w/v) sodium azide to prevent microbial growth during hydrolysis. Enzymatic hydrolysis was carried out in 125 mL flasks with 50 mL of slurry in a water bath shaker (Model C76, New Brunswick Scientific, Edison, NJ, USA) with an agitation speed of 110 rpm at 50 °C for 72 hours. The dry mass content of the hydrolysis slurries was 5% (w/v) and the enzyme loading was 1 mL/g of dry biomass. After enzymatic hydrolysis, samples were ready for sugar analysis.
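For orientation, a small make-up calculation for one 50 mL hydrolysis flask at the stated loadings is sketched below; the assumption that the remaining volume is made up with buffer is ours, since the exact make-up procedure is not spelled out:

```python
def hydrolysis_recipe(slurry_volume_ml: float = 50.0,
                      solids_loading_w_v: float = 0.05,   # 5% (w/v) dry biomass
                      enzyme_ml_per_g: float = 1.0):      # Accellerase 1500 loading
    """Return (dry biomass in g, enzyme in mL, remaining liquid in mL) for one flask."""
    dry_biomass_g = slurry_volume_ml * solids_loading_w_v
    enzyme_ml = dry_biomass_g * enzyme_ml_per_g
    # Assumed: the rest of the slurry volume is 50 mM sodium acetate buffer
    # containing 0.02% (w/v) sodium azide.
    buffer_ml = slurry_volume_ml - enzyme_ml
    return dry_biomass_g, enzyme_ml, buffer_ml

print(hydrolysis_recipe())  # (2.5, 2.5, 47.5): 2.5 g biomass, 2.5 mL enzyme, 47.5 mL buffer
```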
Sugar analysis was done using an HPLC (Shimadzu Corp., Kyoto, Japan) equipped with an RPM-monosaccharide column (300 × 7.8 mm; Phenomenex, Torrance, CA, USA) and a refractive index detector (RID-10A, Shimadzu, Kyoto, Japan). The mobile phase was 0.6 mL/min of double-distilled water, and the oven temperature was 80 °C. HPLC can identify and quantify individual components of a liquid mixture [33].
Particle Size Distribution.
Wood particles produced by knife milling were not uniform in size. Particle size distribution was determined using a screen shaker (Model RO-TAP 8 RX-29, W.S. Tyler Industrial Group, Mentor, OH, USA) as illustrated in Figure 9. A stack of screens was arranged, from bottom to top, from the smallest to the largest screen size. The screen sizes used were 0.1, 0.2, 0.4, 0.6, 1.2, 2.4, 5.6, and 6.3 mm. A pan (no openings) was put at the bottom of these screens. 100 g of wood particles was loaded onto the top screen.
The screen shaker provided circular motion to the stack of screens at the rate of 278 rpm. Simultaneously, the tapping hammer hit the top of the stack at a frequency of 150 times per minute. The screen shaker was run for 5 minutes. Afterwards, wood particles retained on each screen were collected and weighed. The percentage of the wood particles in each of the nine particle size ranges (<0.1, 0.1-0.2, 0.2-0.4, 0.4-0.6, 0.6-1.2, 1.2-2.4, 2.4-5.6, 5.6-6.3, and >6.3 mm) was translated into the particle size distribution [34].
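A minimal sketch of how the retained masses could be translated into the nine reported size ranges is given below; the screen sizes are from the text, while the retained masses are hypothetical:

```python
# Screen sizes (mm) from largest (top of the stack) to smallest (bottom, above the pan).
screens_mm = [6.3, 5.6, 2.4, 1.2, 0.6, 0.4, 0.2, 0.1]

# Hypothetical masses (g) retained on each screen, keyed by that screen's opening size.
retained_g = {6.3: 1.0, 5.6: 2.0, 2.4: 15.0, 1.2: 32.0,
              0.6: 28.0, 0.4: 12.0, 0.2: 7.0, 0.1: 2.5}
pan_g = 0.5  # particles that reached the bottom pan (< 0.1 mm)

total = sum(retained_g.values()) + pan_g
distribution = {f">{screens_mm[0]} mm": retained_g[screens_mm[0]] / total * 100}
for upper, lower in zip(screens_mm[:-1], screens_mm[1:]):
    # Particles retained on the "lower" screen fall in the range (lower, upper].
    distribution[f"{lower}-{upper} mm"] = retained_g[lower] / total * 100
distribution[f"<{screens_mm[-1]} mm"] = pan_g / total * 100

for size_range, percent in distribution.items():
    print(f"{size_range}: {percent:.1f}%")
```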
Energy Consumption in Knife Milling.
Figure 10 shows the energy consumption in knife milling of wood chips. Energy consumption decreased dramatically with an increase in sieve size. For instance, when knife milling wood chips with a moisture content of 1.2%, energy consumption was as high as 1.38 Wh/g for the 1 mm sieve size and only 0.16 Wh/g for the 4 mm sieve size. The same trend was observed for the other two levels of moisture content.
In the literature, there are no reports about the effects of sieve size on energy consumption in knife milling of poplar wood chips. Phanphanich and Mani [35] used a knife mill (of the same model as the one used in this study) to reduce the size of pine wood chips (including chips, branches, barks, leaves, and small particles). The moisture content of the pine wood chips was 10%. Only one sieve size (1.5 mm) was used in their study. Energy consumption in knife milling was 0.25 Wh/g. Miao et al. [23] measured energy consumption in hammer milling of willow wood chips. The hammer mill was manufactured by Sears Roebuck and Co. (Hoffman Estates, IL, USA). The size of the willow wood chips (three dimensions) was 13-50, 13-76, and 5-25 mm. The moisture content was 7-10%. Energy consumption in hammer milling using the 1, 2, and 4 mm sieves was 1.55, 0.66, and 0.39 Wh/g, respectively.
Moisture content of poplar wood chips also affected energy consumption in knife milling. As shown in Figure 11, energy consumption in knife milling increased when moisture content increased from 1.2% to 10% and decreased slightly when moisture content increased from 10% to 18%.
The literature does not have any reports about the effects of moisture content on energy consumption in knife milling of wood chips using the knife mill of the same model as the one used in this study. However, there are reports on these effects in knife milling of herbaceous biomass (such as miscanthus, switchgrass, and wheat straw). Miao et al. [23] investigated energy consumption in knife milling of miscanthus and switchgrass using the same model of knife mill. It was found that when moisture content increased from 7-10% to 15%, energy consumption in knife milling increased significantly. The same trend was also found in size reduction of wheat straw, barley straw, corn stover, and switchgrass using a hammer mill [36]. According to Mani et al. [36], an increase in moisture content of cellulosic biomass would increase the shear strength of the biomass; therefore, more energy was consumed in milling of cellulosic biomass.
Sugar Yield.
Materials used for sugar yield evaluation were the particles produced by knife milling of wood chips with the moisture content of 1.2%. For each sieve size, there were two independent samples processed for sugar yield evaluation. Figure 12 shows the sugar yield results. The results showed that wood particles processed using the 4 mm sieve had the highest sugar yield while sugar yields of wood particles processed using the 1 and 2 mm sieves were approximately the same.
There are reported investigations on the effects of sieve size on sugar yield. Zhang et al.'s results [30] are shown in Figure 13. Wheat straw particles milled using the 2 mm sieve had higher sugar yield than those milled using the 1 mm sieve. The knife mill used was the same model as the one in this paper. Similar results were reported by Theerarattananoon et al. [37]. In their work, wheat straw, corn stover, and big bluestem were milled using a hammer mill (Model 18-7-300, Schuttle-Buffalo Hammermill, Buffalo, NY, USA) with 3.2 and 6.5 mm sieves. For these three types of cellulosic materials, biomass particles milled using the 6.5 mm sieve yielded more sugar than those milled using the 3.2 mm sieve (Figure 14). Both of these reported studies involved a pelleting process (agglomerating the biomass particles produced by milling into pellets) before sugar yield measurement. Figures 15 and 16 show the effects of woody biomass particle size on sugar yield reported in the literature. In Dasari and Berson's study [27], red oak sawdust was screened into four particle size ranges. As shown in Figure 15, particles in the size range of 0.03-0.08 mm yielded 80% more sugar than those in the larger size ranges. In Zhu et al.'s study [26], spruce wood chips were hammer milled in three successive steps using sieve sizes of 12.7, 4.8, and 0.8 mm, respectively. After hammer milling, particles were screened into four particle size ranges. As shown in Figure 16, particles in the size range smaller than 0.32 mm yielded 1.6 times more sugar than those in the size range larger than 1.27 mm. Figure 17 shows the wood particles produced by knife milling using the three different sieve sizes (4, 2, and 1 mm, respectively). The particles produced using the same sieve did not have a uniform size. Their size distribution is shown in Figure 18. Similar distributions were reported by Himmel et al. [24]. In Himmel et al.'s study, poplar wood chips were processed by a knife mill (Mitts & Merrill Frömag Group, Harvard, IL, USA) using 1/16, 1/8, and 3/32 inch (1.59, 3.18, and 2.38 mm) sieves.
The results from this study and the studies conducted by Zhang et al. [30] and Theerarattananoon et al. [37] show that biomass particles produced with a larger sieve size had higher sugar yield. However, results reported by Dasari and Berson [27] and Zhu et al. [26] show that wood particles in the smaller size ranges had higher sugar yield. At this point in time, the authors could not explain this inconsistency.
However, some differences in test conditions were noticed. In the studies reported by Dasari and Berson [27] and Zhu et al. [26], wood particles were from relatively narrow size ranges. In this work, wood particles were mixtures of particles that had a wide distribution in size. Further investigations will be carried out to study the effects of particle size distribution on woody biomass sugar yield.
Conclusions and Future Work
In this study, effects of sieve size on energy consumption in knife milling of poplar wood chips and sugar yield in hydrolysis were studied. The following conclusions are drawn. Energy consumption in knife milling increased dramatically as sieve size became smaller. Poplar wood particles processed by knife milling using the 4 mm sieve had higher sugar yield than those processed by knife milling using the 1 and 2 mm sieves.
Knife milling of wood chips using the 4 mm sieve consumed less energy in size reduction than using the 1 and 2 mm sieves. The wood particles knife milled using the 4 mm sieve had higher sugar yield in hydrolysis than those milled using the 1 and 2 mm sieves. This finding is very important when deciding what sieve size is to be used in knife milling of wood chips to minimize energy consumption in size reduction and maximize sugar productivity in hydrolysis. In future studies, the authors will also use 0.25, 0.5, and 8 mm sieves to further investigate the effects of sieve size on energy consumption in size reduction and sugar yield in hydrolysis.
A hammer mill will be utilized to see if similar results can be obtained on different types of milling machines. More types of cellulosic materials will be tested to see if conclusions obtained in this study can be extended to different types of cellulosic biomass.
|
v3-fos-license
|
2022-09-28T15:07:38.060Z
|
2022-09-26T00:00:00.000
|
252558659
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://downloads.hindawi.com/journals/ecam/2022/5556067.pdf",
"pdf_hash": "d425fd8b3570fa20ce63c69479d3ee2d84520e1b",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46447",
"s2fieldsofstudy": [
"Environmental Science",
"Medicine"
],
"sha1": "1667646e53c6505175edba41bda610ee56b75f79",
"year": 2022
}
|
pes2o/s2orc
|
Dl-3-n-Butylphthalide (NBP) Mitigates Muscular Injury Induced by Limb Ischemia/Reperfusion in Mice through the HMGB1/TLR4/NF-κB Pathway
Objective Limb ischemia/reperfusion (I/R) injury is a clinical syndrome associated with severe damages to skeletal muscles and other fatal outcomes. Oxidative stress and inflammatory response play vital roles in the development of limb I/R injury. Existing evidence further indicates that Dl-3-n-butylphthalide (NBP) has anti-inflammatory and antioxidative properties. However, whether NBP can protect skeletal muscles from limb I/R injury and the mechanism in mediating the action of NBP treatment still remain to be investigated, which are the focuses of the current study. Methods The model of limb I/R injury was established and H&E staining was adopted to assess the pathological changes in skeletal muscles following limb I/R injury. Additionally, the W/D ratio of muscle tissue was also measured. ELISA and biochemical tests were carried out to measure the levels of inflammatory cytokines and oxidative stress in mouse models of limb I/R injury. Moreover, the levels of the HMGB1/TLR4/NF-κB pathway-related proteins were also determined using immunohistochemistry and immunoblotting. Results It was established that NBP treatment alleviated I/R-induced pathological changes in muscular tissue of mice, accompanied by lower W/D ratio of skeletal muscular tissue. Meanwhile, the limb I/R-induced inflammation and oxidative stress in skeletal muscles of mice were also inhibited by NBP. Mechanistic study indicated that the alleviatory effect of NBP was ascribed to inactivation of the HMGB1/TLR4/NF-κB pathway. Conclusions Our findings highlighted the potential of NBP as a novel strategy for limb I/R-driven muscle tissue damages by suppressing inflammatory response and oxidative stress via the HMGB1/TLR4/NF-κB pathway.
Introduction
Limb ischemia/reperfusion (I/R) injury is classified as a clinical syndrome with fatal outcomes for humans. Significant advancements in the understanding of limb I/R injury have shown that arterial embolism, trauma, blood clots, abdominal compartment syndrome, prolonged tourniquet application, and limb or flap reattachment serve as the primary causes of limb I/R injury [1]. Skeletal muscles are particularly susceptible to limb I/R injury; even moderate I/R injuries can precipitate permanent damage and necrosis of skeletal muscles, which could ultimately exert detrimental effects on limb functionality. Unfortunately, in severe cases some patients must undergo amputation to save their lives. Further adding to the plight, multiple organ dysfunction syndrome (MODS) is a common occurrence in patients with critical limb I/R injury, which represents a fatal condition for patients [2].
Pathologically, a plethora of mechanisms are known to be associated with skeletal muscle damage caused by limb I/R injury. However, inflammatory response and oxidative stress play critical roles in advancing skeletal muscle I/R injury [3,4]. Organs involved in I/R injury present with enhanced production of reactive oxygen species (ROS), whereas the overproduction of ROS has previously been associated with diminished expression of antioxidative stress-related proteins, which further exacerbates I/R injury [4]. Typically, malondialdehyde (MDA) and superoxide dismutase (SOD) are commonly utilized as indicators of oxidative stress [5]. Moreover, ROS-induced injuries are known to trigger the release of proinflammatory cytokines [6].
In the course of distinct cellular processes, including cell damage, cell death, and cytokine stimulation, a ubiquitous DNA-binding nuclear protein, known as high mobility group protein B1 (HMGB1), can be secreted into the extracellular region as a factor with effective proinflammatory features [7]. It should be noted that HMGB1 has previously been shown to enhance inflammatory responses in microvascular injury of endothelial cells via the release of proinflammatory cytokines in the blood of septic patients [8]. Moreover, there is evidence to suggest that HMGB1 proteins carry out their biological functions by binding with cell-surface receptors, specifically known as toll-like receptors (TLRs) [7]. TLRs are classified under pattern recognition receptors (PRRs), which possess the ability to activate innate and adaptive immune responses [9]. Currently, approximately thirteen members of the TLR family have been discovered and identified in mammals. Of these, TLR4 was the first to be recognized for its ability to bind with lipopolysaccharide and produce proinflammatory cytokines; moreover, among all members of the TLRs, TLR4 has also been the most extensively characterized and widely expressed receptor [10]. Accumulating evidence has shown that TLR4 exhibits a significant role in both innate and adaptive immune responses. In addition, TLR4 has previously been documented to mediate organ injuries in various I/R models [11]. Strikingly, the efforts of Oklu et al. have demonstrated that TLR4 exerts a vital influence in the pathogenesis of limb I/R injury [12]. TLR4 triggered by limb I/R injury is further established to activate nuclear factor (NF)-κB via the myeloid differentiation factor 88 (Myd88)-dependent pathway, resulting in the release of proinflammatory cytokines and eventually leading to aggravation of tissue damage [13].
Dl-3-n-butylphthalide (NBP), obtained from the seeds of Apium graveolens Linn (commonly known as Chinese celery), is widely adopted in clinical settings to treat ischemic stroke. NBP has numerous therapeutic benefits, which include antioxidant [14], antiapoptotic [15], and anti-inflammatory properties [16]. Additionally, prior evidence indicates that the administration of NBP can alleviate cerebral I/R-induced brain injury and spinal cord injury through TLR4/NF-κB inhibition [17]. However, whether NBP has the ability to safeguard skeletal muscle from limb I/R injury, and the potential underlying mechanisms that mediate the action of NBP treatment, remain to be studied. Accordingly, the current study aimed to elucidate the protective effects of NBP on the skeletal muscle against limb I/R injury and to clarify the underlying signaling pathway that mediates the beneficial effects of NBP.
Mouse Model Generation of Femoral Artery I/R Injury.
The femoral artery and vein were exposed, the blood supply was blocked with a clamp, and a band was fitted around the left thigh to induce ischemia for 1.5 h. Afterward, the clamp together with the band was removed to induce reperfusion for 72 h prior to sampling.
Experimental Groups and Drug Treatment.
Following femoral artery I/R modeling, the mice were assigned to sham, I/R, and I/R + NBP (40 mg/kg; Bide Pharmatech Ltd., Shanghai, China) groups. The femoral artery and vein of the mice in the sham group were blocked for 1.5 h, and the mice were intraperitoneally injected with saline prior to reperfusion on the day of the surgery, followed by intraperitoneal saline administration once daily for two additional days prior to sampling. Meanwhile, the femoral artery and vein of the mice in the I/R group were blocked for 1.5 h, following which the mice were intraperitoneally injected with saline prior to reperfusion on the operation day, and the mice received intraperitoneal administration of saline once per day for two days before sampling. In the I/R + NBP group, the femoral artery and vein of the mice were blocked for 1.5 h. Afterward, the mice received an intraperitoneal injection of 40 mg/kg NBP in saline before reperfusion on the operation day, and the drug treatment was given once per day for two days before sampling. Afterward, all the mice were euthanized by exsanguination, followed by tissue collection.
Histology and Immunohistochemistry.
Following 3 days of administration, pretibial muscle tissues were removed and paraffin-embedded. Subsequently, the sections (5 μm) were subjected to staining with H&E to assess the general histology and inflammation as previously described [4,6].
Immunohistochemistry was carried out to measure HMGB1 and TLR4 expression patterns in pretibial muscle tissue sections (5 μm). Initially, the sections were deparaffinized and rehydrated prior to treatment with 3% (v/v) H2O2 in methanol, followed by blocking with BSA. The sections were probed with antibodies to HMGB1 (ab18256; 1:300; Abcam) and TLR4 (ab217274; 1:200; Abcam). Once the sections were washed, bound antibodies were detected with biotin-labeled secondary antibodies and an ABC kit. A light microscope was utilized to visualize the staining.
Wet/Dry (W/D) Weight Ratio of Muscle Tissue.
The pretibial muscle was excised from the left hind limb and instantly weighed; this was recorded as the wet weight. Subsequently, the muscle was dehydrated and then weighed again to obtain the dry weight. W/D ratio = wet weight/dry weight [4,6].
Statistical Analysis.
SPSS 19.0 software was adopted for analysis of all results, which are presented as mean ± SD. One-way ANOVA followed by the SNK-q test or Dunnett's multiple comparison test was applied for multigroup comparisons. p < 0.05 indicates statistical significance.
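A sketch of how such a multigroup comparison could be reproduced outside SPSS is shown below; the data are invented, scipy.stats.dunnett requires SciPy >= 1.11, and the sham group is taken as the Dunnett control purely for illustration:

```python
import numpy as np
from scipy import stats

# Hypothetical W/D ratios for the three groups (n = 6 per group).
sham = np.array([3.1, 3.0, 3.2, 2.9, 3.1, 3.0])
ir   = np.array([4.2, 4.5, 4.1, 4.4, 4.3, 4.6])
nbp  = np.array([3.6, 3.7, 3.5, 3.8, 3.6, 3.9])

# One-way ANOVA across the three groups.
f_stat, p_anova = stats.f_oneway(sham, ir, nbp)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_anova:.4f}")

# Dunnett's test comparing I/R and I/R + NBP against the sham control.
res = stats.dunnett(ir, nbp, control=sham)
print("Dunnett p-values vs. sham:", res.pvalue)
```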
NBP Alleviated I/R Injury-Driven Skeletal Muscle Damage.
In the H&E staining, healthy fibers were identified by intact, regularly arrayed borders without breaks, holes, or edema, whereas injured fibers were identified by edema and broken, fragmented fibers. I/R mice showed muscle fiber degeneration, dissolution of the sarcoplasm, and inflammatory cell infiltration along with myoedema, whereas healthy fibers were observed in the mice of the sham group. Meanwhile, NBP treatment reduced the degree of inflammation in the muscular tissue (Figure 1(a)). Consequently, NBP treatment alleviated the I/R-induced pathological alterations in muscular tissues (Figure 1(b), p < 0.05).
Furthermore, mice in the I/R group presented with a higher W/D ratio for the skeletal muscle tissue compared to mice in the sham group (Figure 1(c), p < 0.05). In contrast, the W/D ratio was lower in the NBP group than that in the I/R group (Figure 1(c), p < 0.05).
Moreover, elevated levels of LDH and CK-MB were found in the I/R group versus the sham group, suggestive of muscle damage along with pathological alterations following I/R injury. On the other hand, NBP treatment led to contrasting results (Figures 1(d) and 1(e), p < 0.05).
NBP Ameliorated Skeletal Muscle Inflammatory Response in Mice with Limb I/R Injury.
Additional assessment revealed higher levels of IL-1β, TNF-α, and IL-6 in the I/R group relative to those in the sham group. Conversely, NBP therapy reversed the increased inflammatory cytokine levels (Figure 2, p < 0.05).
NBP Alleviated Oxidative Stress in the Skeletal Muscles of Mice with Limb I/R Injury.
Relative to the sham group, MDA production was increased in the I/R group but was lowered in the NBP group (Figure 3(a), p < 0.05). Moreover, lower SOD activity was observed in the I/R group compared to the sham group. Meanwhile, the NBP group had higher SOD activity than the I/R group (Figure 3(b), p < 0.05).
NBP Inactivated the HMGB1/TLR4/NF-κB Pathway in Mice with Limb I/R Injury.
Furthermore, the impact of NBP treatment on the HMGB1/TLR4/NF-κB pathway in the muscular tissues of the different groups of mice was investigated 72 h after treatment using immunohistochemistry and immunoblotting (Figures 4(a) and 4(b)).
While lower HMGB1 and TLR4 levels were expressed in the sham group, elevated HMGB1 and TLR4 levels were documented in the I/R group. Additionally, in contrast to the I/R group, the quantity of HMGB1 and TLR4 positive cells was diminished in the NBP group (Figure 4(a)).
As illustrated in Figure 4(b), the I/R group had elevated protein expression of HMGB1, TLR4, and Myd88 and a greater extent of p65 phosphorylation compared to the sham group. However, these elevations were abolished in the NBP group relative to the I/R group, indicating that NBP treatment, for the most part, repressed HMGB1/TLR4/NF-κB pathway activation in the muscular homogenate, adding to its defensive influence against limb I/R injury.
Discussion
Limb I/R injury-induced oxidative stress, which is accompanied by an inflammatory response, contributes to profound muscular tissue dysfunction. The current study was performed with the goal of exploring whether NBP lessened the inflammation induced by limb I/R injury and mitigated tissue edema in skeletal muscles. Accordingly, our findings indicated that NBP ameliorated the inflammatory responses and oxidative stress induced by limb I/R injury. Moreover, we uncovered mechanistically that NBP inactivated HMGB1/TLR4/NF-κB signaling in the skeletal muscle after limb I/R injury, highlighting that the protective effects of NBP were attributed to repression of HMGB1/TLR4/NF-κB (Figure 5).
Limb I/R injury-driven skeletal muscle damage is a clinical challenge that requires much attention [1][2][3]. Currently, several approaches are employed to treat limb I/R injury, which include the use of physical and chemical treatment regimens. In addition, hypothermia, ischemic preconditioning, ischemic postconditioning, controlled reperfusion, and light-emitting diode therapy possess the ability to alleviate limb I/R injury-driven skeletal damage [18][19][20]. Some medications, such as curcumin, dexamethasone, simvastatin, silibinin, cyclosporine A, and saline [21][22][23], are also known to be effective in reducing limb I/R injury-driven skeletal damage. However, the usage of such methods remains limited in severe wounds. In specific cases, especially in severe extremity injuries, surgery is warranted to prevent fatal hemorrhage and to protect the function of major organs. More importantly, while the above-mentioned procedures and pharmaceutical regimens have been demonstrated to be successful in research settings, none have proven effective in clinical settings. Accordingly, a pressing need exists to identify novel agents with anti-inflammatory and antioxidant characteristics that can be adopted for the treatment of I/R injury-driven skeletal muscle damage.
NBP is known to be clinically effective against ischemic stroke. The adoption of NBP in general clinical use is attributed to its broad range of characteristics, including its antioxidative [14], antiapoptotic [15], and anti-inflammatory [16] properties. Of note, there is much evidence to suggest that NBP could mitigate brain edema induced by concussive head injury [24]. In addition, prior data have further demonstrated that NBP can guard against cerebral I/R injury-triggered edema development by protecting the blood-brain barrier from breakdown [25]. Herein, our findings unveiled that NBP alleviated the limb I/R injury-driven damage of skeletal muscle tissues in mice. Additional histological assessment illustrated fewer pathological changes in the presence of NBP. Moreover, our findings revealed that NBP could alleviate edema in the skeletal muscle after limb I/R injury. Furthermore, inflammatory responses are essential for the pathogenesis of skeletal muscle I/R injury, which is characterized by infiltration of inflammatory cells [4]. Similarly, infiltrating neutrophils deliver an assortment of proinflammatory cytokines; for instance, IL-1β, TNF-α, and IL-6 are capable of aggravating inflammatory responses [26]. In addition, numerous studies have indicated the ability of NBP to alleviate inflammation in cerebral I/R injury and other illnesses associated with inflammatory responses [27,28]. Much in accordance with the same, our findings illustrated that NBP could mitigate the degree of inflammatory reactions in the skeletal muscle tissues caused by limb I/R injury.
Oxidative stress, a consequence of imbalance between production and accumulation of ROS in cells, is remarkably imperative in the advancement of the limb I/R injury process [6]. Experimentation in our study indicated that NBP could significantly diminish oxidative stress of skeletal muscle triggered by limb I/R injury by lessening MDA production and augmenting SOD activity. In line with the current discovery, a prior study documented the antioxidative properties of NBP in the brain of patients with Alzheimer's disease [29]. Moreover, NBP was previously shown to relieve anxiety and depression-like behaviors through the restriction of oxidative stress [30]. All the aforementioned studies are indicative of the usage of NBP in the management of limb I/R injury-driven skeletal muscle damage due to its antioxidative properties.
Activation of the HMGB1/TLR4/NF-κB pathway has been previously observed under inflammatory responses or oxidative stress conditions [31]. Besides, the HMGB1/TLR4/NF-κB pathway is likewise activated in various I/R models [32]. Herein, our findings demonstrated increased protein expression of HMGB1, TLR4, and Myd88 and a greater extent of p65 phosphorylation in the I/R group, whereas declines were documented in the NBP group. The efforts of Zhang et al. revealed that NBP treatment essentially ameliorated cerebral I/R-triggered brain injury by restraining TLR4/NF-κB-related inflammatory responses [33]. Additionally, He et al. showed that NBP decreased activation of BV2 cells, reduced the release of inflammatory cytokines, and further restrained the expression of TLR4/NF-κB in BV2 cells, consequently safeguarding against spinal cord injury. Nonetheless, none of these studies focused on the effects of NBP on inflammatory responses through HMGB1. Accordingly, our study is the first of its kind to reveal that NBP treatment inhibited the protein expression of HMGB1, thereby repressing I/R-induced muscular injury.
Conclusions
In summary, this study dissected out the effect of NBP on limb I/R injury-driven skeletal muscle damage, and our findings suggested that NBP could protect the limbs against I/R injury by inhibiting inflammatory responses and oxidative stress via the HMGB1/TLR4/NF-κB pathway. Additionally, our discoveries highlight the potential of NBP to serve as an effective strategy against I/R injury-driven skeletal muscle tissue damage.
Data Availability
All data generated or analyzed during this study are available from the corresponding author upon reasonable request.
Ethical Approval
All the experimental procedures conducted were approved by the Ethics Committee for Animal Use of Hebei Medical University.
Conflicts of Interest
The authors declare that they have no conflicts of interest.
Authors' Contributions
XG, YRZ, HHS, and JQW designed the study. HHS, JQW, KC, and LS performed the experiments. ML and JWZ analyzed the data. WB, FZ, ML, and JWZ advised on histological staining and analysis. HHS drafted and wrote the manuscript. YRZ and XG revised the manuscript critically for intellectual content. All authors gave intellectual input to the study and approved the final version of the manuscript.
|
v3-fos-license
|
2022-06-05T15:11:02.915Z
|
2022-06-03T00:00:00.000
|
249349161
|
{
"extfieldsofstudy": [],
"oa_license": "CCBY",
"oa_status": "HYBRID",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s12649-022-01824-8.pdf",
"pdf_hash": "60ac4ffceb29defdf508f4241a1f66cc235eeb4a",
"pdf_src": "Springer",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46448",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"sha1": "992a5f53ae26f398ac522fbc8e8d81c8f47e0be0",
"year": 2022
}
|
pes2o/s2orc
|
Sequential Pretreatment of Wheat Straw: Liquid Hot Water Followed by Organosolv for the Production of Hemicellulosic Sugars, Lignin, and a Cellulose-Enriched Pulp
The complete valorization of the lignocellulosic fractions plays a fundamental role in biorefineries’ sustainability. One of the major challenges is finding technological configurations that allow using cellulose, hemicellulose, and lignin simultaneously. Cellulose has been extensively studied, yet, hemicellulose and lignin remain as platforms to be valorized. Sequential pretreatments have shown an opportunity to valorize the latter two components into sugar-rich and lignin-rich fractions. After the sequential pretreatment, a solid fraction enriched in cellulose could still be used for paper production. This work consisted of pretreating wheat straw with a sequential Liquid-Hot-Water/Organosolv, characterizing the respective hemicellulosic sugar and lignin extracts, and evaluating the final cellulose-enriched pulp for papermaking. Different pretreated pulp/cellulose pulp formulations were used for paper production as a proof-of-concept. Tensile strength and bursting pressure of the papers were measured. After pretreatment, the calculated solid composition was 70%wt cellulose, 26%wt hemicellulose, and 4%wt lignin, with extraction yields of 5.1%, 51.3%, and 89.9%, respectively. The tested pulp formulations showed similar tensile index and bursting index values at 10/90 (77.1 Nm/g, 3189 kPa) and 20/80 (63 Nm/g, 2419 kPa) %wt pretreated pulp/pulp when compared to the pulp (77 Nm/g, 4534 kPa). This proof-of-concept of the papermaking showed the potential of the LHW-OS pretreated wheat straw as a replacement for pure cellulose pulp and encourages studying other substitutes such as unbleached pulp or further treating the LHW-OS pretreated wheat straw. In addition, the hemicellulosic sugars and lignin extract can be valorized, indicating an option for developing a biorefinery concept.
Introduction
Valorizing the different biomass fractions plays a fundamental role in the sustainability of a biorefinery [1,2]. Among the different components of the lignocellulosic matrix, cellulose has been extensively studied as fiber for material applications, pulp for the paper industry, and enzymatic conversion into glucose for fermentation-based processes. Traditionally, hemicellulose and lignin were addressed as components without value to be removed to increase the accessibility to cellulose. Nonetheless, hemicellulose and lignin have been studied in the last years as platforms to be valorized into value-added products [3][4][5]. Hemicellulose valorization could be directed as a substrate for fermentation processes or to obtain furan-based components [6,7]. Lignin valorization can be directed to drug delivery systems, delivery of hydrophobic molecules, UV barriers in sunscreens, antibacterials, and coatings/paints, among others [8,9].
One major challenge remains in determining technological configurations of pretreatments, which allow the simultaneous valorization of the cellulose, hemicellulose, and lignin into value-added products and stress the importance of the pretreatment section to achieve such purpose [10]. Sequential pretreatment combinations have shown an opportunity to simultaneously deconstruct hemicellulose and lignin. Xia et al. (2020) evaluated Liquid Hot Water (LHW) followed by sodium carbonate-oxygen pretreatment to improve the reed enzymatic saccharification [11]. Neves et al. (2016) and Rocha et al. (2012) studied steam explosion followed by alkaline pretreatment of sugarcane bagasse [12,13]. Tian et al. (2019) combined LHW with mechanical extrusion from rigid hardwood [14]. Wang et al. (2012) combined a fungal treatment with LHW of white poplar [15]. Multiple authors have studied the combination of LHW and Organosolv (OS) to produce sugars from the hemicellulose and hydrolyze the lignin from wheat straw [3,16,17]. Other authors have evaluated the same configuration, LHW followed by OS, for different raw materials such as hazelnut shells [18] and corncobs [19].
Different biorefinery concepts have been proposed for wheat straw. Chang et al. (2018) studied the production of biosurfactants from hemicellulosic sugars, lignin, and methyl levulinate [20]. Yuan et al. (2018) proposed a biorefinery based on Organosolv to obtain lignin, silica, and ethanol [21]. Kaparaju et al. (2008) evaluated the production of bioethanol (from cellulose), biohydrogen (from hemicellulose), and biogas (from effluents from bioethanol and biohydrogen production) [22]. Rebolledo-Leiva et al. (2008) evaluated the environmental assessment of itaconic acid production from wheat straw [23]. As observed, in most of the biorefinery concepts, the pulp obtained after a given pretreatment is used further to produce cellulosic sugars and then go to a fermentation route to produce either biofuels or bio-based chemicals. However, the solubilization of hemicellulose and lignin leaves then a solid fraction enriched in cellulose, which potentially can be used for pulp and paper applications. Malik et al. (2020) tested alkali-, hot-water-, and acid-mediated extraction of wheat straw prior to pulping and papermaking [24].
A sequential combination of pretreatments aiming to valorize hemicellulose and lignin from wheat straw and further test the remaining pulp for papermaking has not been studied. This work consisted of performing a sequential LHW and Organosolv (OS) pretreatment of wheat straw to produce hemicellulosic sugars, lignin, and a cellulose-enriched pulp. After completing the sequential LHW-OS, the cellulose-enriched solid was evaluated in a proof-of-concept test for papermaking. This way, an overall picture of the usage of the three main feedstock fractions is addressed. The pretreatment technologies and the subsequent combination of LHW and OS were chosen based on a study previously performed by the authors [16]. LHW focuses on hemicellulose hydrolysis; it uses only water as a reactant and is auto-catalyzed by the acetic acid released from the hemicellulose backbone. OS enables solubilizing part of the hemicellulose and removing most of the lignin. With this sequential pretreatment, three intermediate products can be obtained: an extract mainly composed of hemicellulosic sugars, an extract with lignin, and the final pulp with reduced lignin and hemicellulose content. After the pretreatment, the obtained pulp was mixed in different formulations with pure cellulose pulp, and the resulting paper properties were determined. Untreated straw and pure cellulose pulp were subjected to the same paper production process to compare the obtained properties.
Raw Materials and Reagents
Wheat straw used in this work was harvested in 2019 (Margarethen am Moos, State of Lower Austria) and stored under dry conditions at room temperature. The straw was milled in a cutting mill. The fraction between 0.2 and 0.6 mm was used for the extraction process. The raw material composition was 2.13, 0.67, 35.31, 21.94, 0.72, 17.35, 20.45, and 1.09% (wt.; dry basis) for arabinan, galactan, glucan, xylan, mannan, lignin, extractives, and ash, respectively [16]. The moisture content was 7.16%wt. Of the 20.45%wt of extractives, 15.97%wt correspond to water extractives and 4.48%wt to ethanol extractives (characterized according to NREL/TP-510-42619) [25]. The feedstock used in this work corresponds to the same batch used in the study reported for the lignocellulosic characterization.
Sequential Pretreatment Stage: LHW Followed by OS
The general pretreatment strategy for this work consisted of performing LHW, washing the solid, then conducting an OS on the washed solid, and finally rewashing it. Three intermediate product streams were obtained: a hemicellulosic sugar extract, a lignin extract, and a cellulose-enriched pulp. The obtained extracts were characterized for sugars, degradation products, and lignin. As LHW and OS focus on hydrolyzing mainly hemicellulose and lignin but not on cellulose hydrolysis, the remaining pulp should be enriched in cellulose, which would indicate a potential use for papermaking. Therefore, we tested the produced pulp in a proof-of-concept test of papermaking.
Sequential Pretreatment
The sequential pretreatment follows the strategy proposed by Serna-Loaiza et al. (2021) [16], with certain modifications that will be explained as follows. Figure 1 shows the general scheme of the process performed in this work. The general procedure was feeding the raw material to the reactor and carrying out the LHW extraction. Then, the mixture was separated (pressing), the extract was collected (LHW extract), and the solid was washed with water at 50 °C. The solid was pressed again, the washing was collected (LHW Washing), and the solid was used to carry out the second pretreatment (Organosolv). The pressing and washing process was repeated twice: the first wash was done with 60%wt aqueous ethanol at 50 °C and the second wash with water at 50 °C.
The main differences with the previously reported study are: (1) The LHW pretreatment was carried out at 160 °C with a holding time of 90 min (instead of 180 °C and 30 min). This decision was made based on other research carried out by the authors using the same equipment and feedstock batch used here. Serna-Loaiza et al. (2022) showed that performing LHW at a severity factor of around 3.77 (reached at around 160 °C and a 90 min holding time) generates higher hemicellulose hydrolysis and lower lignin hydrolysis when compared to a severity factor of 4.05 (reached at around 180 °C and a 30 min holding time). A more detailed description can be found in the cited publication [26]. (2) The intermediate washing steps were carried out for two reasons: first, to remove the hydrolyzed sugars, lignin, and degradation products that might remain in the solid and therefore end up with a cleaner solid; and second, to allow a better quantification of the yields of each separate pretreatment stage. As some of the extracted components from the LHW remain in the moisture of the solid, an increased extraction yield in the OS step is not necessarily related to the hydrolysis reached in this stage.
Both LHW and OS were carried out in a stainless-steel high-pressure autoclave (Zirbus, HAD 9/16, Bad Grund, Germany), stirring at 200 rpm. The initial dry mass of wheat straw used for the LHW was approximately 35 g (38.58 g wet mass), with a solid/liquid ratio of 1 g of dry solid per 11 g of solvent. The moisture content of the solid was subtracted from the prepared solvent. The reactor was heated to 160 °C and cooled down after the 90 min holding time. Subsequently, the solid and liquid fractions were separated using a hydraulic press (Hapa, HPH 2.5) at 200 bar and a centrifuge (Sorvall, RC 6+) at 24,104 g for 20 min. The extract's density was determined using a density meter (DE45 DeltaRange, Mettler Toledo, Columbus, United States). The supernatant was stored at 5 °C until further analysis. The pressed solid fraction was submerged in water at 50 °C and manually disintegrated in the water for 5 min. The amount of water used for washing corresponded to the same amount used for the solvent without correcting the moisture content (385 g). Then, the solid was pressed, the wash was collected and stored at 5 °C for analysis, and the solid was stored at -5 °C. In total, 12 repetitions of the LHW stage were performed. The extracts were mixed in four groups of three samples (LHW 1-3, 4-6, 7-9, and 10-12). All solid samples were thawed to room temperature and mixed. In total, 736.4 g of solid were collected with a dry matter content of 37.07%wt.
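A small mass-balance sketch of the solvent preparation rule described above is given below; only the 1:11 solid-to-liquid ratio and the moisture subtraction follow the text, while the function name and the numerical inputs are illustrative:

```python
def solvent_to_add(wet_solid_g: float, moisture_fraction: float, sl_ratio: float = 11.0) -> float:
    """Solvent mass to prepare so that total liquid = sl_ratio x dry solid,
    after subtracting the water already present in the wet solid."""
    dry_solid_g = wet_solid_g * (1.0 - moisture_fraction)
    water_in_solid_g = wet_solid_g - dry_solid_g
    return sl_ratio * dry_solid_g - water_in_solid_g

# Illustrative: roughly 38.6 g of wet straw at an assumed 9% moisture content.
print(round(solvent_to_add(38.6, 0.09), 1), "g of solvent to prepare")
```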
The severity of the LHW stage was characterized by the severity factor (log10 R_0), calculated from the reaction ordinate R_0 obtained by integrating the temperature profile T(t) over the treatment time (Eq. 1):

R_0 = ∫ exp[(T(t) − 100) / 14.75] dt    (1)

The constant 14.75 corresponds to an empirical parameter calculated assuming an overall reaction following first-order kinetics and an Arrhenius relation with temperature [27]. Equation 1 was solved numerically by the trapezoidal rule with a Δt of 1 s, and the total reaction ordinate was obtained as the sum of the contributions of the heating, holding, and cooling phases (Eq. 2):

Total R_0 = R_0,Heating + R_0,Holding + R_0,Cooling    (2)
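A minimal sketch of how Eq. 1 can be evaluated from a logged temperature profile is given below (Python). The isothermal profile used here is only illustrative; the measured profiles, which also include the heating and cooling contributions of Eq. 2, are given in Supplementary Material S3.

import math

def severity_factor(times_s, temps_c, t_ref=100.0, omega=14.75):
    """log10 of the reaction ordinate R0, integrating exp((T - 100)/14.75) dt (t in minutes)."""
    r0 = 0.0
    for i in range(1, len(times_s)):
        dt_min = (times_s[i] - times_s[i - 1]) / 60.0
        f_prev = math.exp((temps_c[i - 1] - t_ref) / omega)
        f_curr = math.exp((temps_c[i] - t_ref) / omega)
        r0 += 0.5 * (f_prev + f_curr) * dt_min      # trapezoidal rule
    return math.log10(r0)

# Illustrative isothermal check: 90 min at a constant 160 degC gives a severity factor of ~3.72;
# the measured runs are slightly higher because heating, temperature overshoot, and cooling also contribute.
times = list(range(0, 90 * 60 + 1))                 # 1 s resolution, as in the text
temps = [160.0] * len(times)
print(round(severity_factor(times, temps), 2))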
OS-Water Wash
The collected solid was used for the OS stage, which was carried out using 60%wt aqueous ethanol at 180 °C [28]. The solid/liquid ratio was 1 g of dry solid per 11 g of solvent, and the moisture content of the solid was subtracted from the prepared solvent. The total operation time was fixed at 60 min (heating time of approximately 45 min and holding time of 15 min). After the extraction, the separation of liquid/solid fractions (press and centrifuge), solid washing and pressing, and storage were done as described for the LHW stage. The washing, in this case, was performed first using 60%wt aqueous ethanol at 50 °C followed by water at 50 °C. In both cases, 385 g of solvent were used. In total, eight repetitions of the OS stage were performed. The extracts were mixed in three groups (OS 1-3, 4-6, and 7-8). All solid samples were thawed to room temperature, mixed, dried at 40 °C, and used for the papermaking tests.
Product Characterization
The sugar extract, lignin extract, and the respective LHW and OS washes were characterized for sugars, degradation products, ash, and lignin. Sugars and degradation products were characterized according to NREL/TP-510-42623 [29]. Monomeric sugars were analyzed using HPAEC-PAD (ICS-5000, Thermo Scientific, USA) with deionized water as eluent. Oligomeric sugars were hydrolyzed (dilute sulfuric acid) at 120 °C and analyzed as monomers. A sugar recovery standard was used to account for losses. Furfural, HMF, and acetic acid were determined using HPLC (LC-20A HPLC system, Shimadzu, Japan) with UV and RI detection on a Shodex SH1011 analytical column at 50 °C with 0.005 M H2SO4 as mobile phase. The lignin concentration was measured as acid-soluble lignin (ASL) and acid-insoluble lignin (AIL): the extract was dried, and the solid was subjected to the protocol established in NREL/TP-510-42618 [30]. Extraction yields were calculated based on the measured concentrations, the solid-liquid ratio, and the density of the extract, and are reported in weight percentage (dry basis) using Eq. 3:

Y_i = Conc_i / (SL_ratio × ρ_Extract) × 10⁻⁴    (3)

where Y_i is the extraction yield of component i per added wheat straw on a dry basis in weight percentage (%wt), Conc_i is the concentration of the measured component in [mg/L], SL_ratio is the solid-liquid ratio (1 g of dry wheat straw per 11 g of solvent), ρ_Extract is the density of the respective extract in [g/mL], and the factor 10⁻⁴ collects the unit conversions to %wt.
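A minimal sketch of the yield calculation of Eq. 3 is given below (Python; the function name is ours, and the example concentration and density are placeholders of the same order as the values reported later for the LHW sugar extract).

def extraction_yield_wt_pct(conc_mg_per_l, extract_density_g_per_ml, solvent_per_dry_g=11.0):
    """Extraction yield Y_i in %wt of dry straw, following the definitions of Eq. 3.

    conc_mg_per_l: concentration of component i in the extract [mg/L]
    extract_density_g_per_ml: density of the extract [g/mL]
    solvent_per_dry_g: g of solvent per g of dry straw (1:11 in this work)
    """
    litres_per_g_straw = solvent_per_dry_g / extract_density_g_per_ml / 1000.0
    mg_per_g_straw = conc_mg_per_l * litres_per_g_straw
    return mg_per_g_straw / 1000.0 * 100.0        # mg -> g, then fraction -> %wt

# e.g. ~13 g/L total sugars in an extract of density ~1.01 g/mL -> ~14 %wt of the dry straw
print(round(extraction_yield_wt_pct(13000.0, 1.01), 1))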
Proof-of-Concept for Pulp Evaluation: Formulations for Papermaking
The pulp resulting from the sequential pretreatment was used for papermaking as a replacement for pure cellulose pulp. The papermaking was carried out in collaboration with a company dedicated to producing specialty papers, which chose to remain anonymous for commercial reasons. The evaluation consisted of forming paper sheets with varying proportions of pretreated wheat straw pulp (Pret-WSP) combined with milled birch kraft pulp (BKP), which is typically used for papermaking. The formulations were 10, 20, and 30%wt of Pret-WSP, with the remaining fraction being BKP. For comparison, sheets with 100% BKP and the same formulations with untreated wheat straw raw pulp (WSP) were also formed.
The initial characterization of the wheat straw pulps (both Pret-WSP and WSP) determined the beating degree (Schopper-Riegler number, °SR) according to ISO 5267-1:1999. The °SR number indicates the degree of refining/freeness, which is related to the drainage rate of a dilute pulp suspension. Paper sheets were formed with a Rapid-Köthen laboratory sheet former.
Results and Discussion
This work aims to provide an integral analysis of all streams resulting from the intermediate refining of wheat straw, following a sequential pretreatment of LHW and OS. The first section of the results focuses on characterizing the extracts (sugar and lignin extracts) resulting from the LHW and OS stages, respectively. The characterization covers sugars, degradation products, and lignin. The second section then focuses on the evaluation of the formulations for papermaking. Figure 2 shows the characterization of the extracts obtained during the sequential LHW-OS combination; Fig. 2a, b, and c correspond to sugars, degradation products, and lignin and ash, respectively (Fig. 2: Extraction yields of the LHW and OS stages for sugars, degradation products, and lignin. Numbers above bars indicate the average value, and error bars the standard deviation. AIL: acid-insoluble lignin; ASL: acid-soluble lignin; Total lignin: AIL + ASL. Yields expressed as weight percentage (%wt) of dry wheat straw.). The information used for the calculation can be found in the Supplementary Material S1. The first analysis carried out for the LHW stage is sugar production (monomeric and total sugars). We calculated the total concentration of sugars (C5 plus C6) by summing the C5 (arabinose and xylose) and C6 (galactose, glucose, and mannose) concentrations for both monomeric and total sugars. When comparing the extract and the respective wash (LHW wash), the concentrations in the wash were 10-20% of those reached in the LHW extract for almost all components, except for monomeric C6 sugars (34%) and total C6 sugars (9%). These results indicate the importance of the washing step, as these components result from the hydrolysis step and would otherwise have remained in the solid and accumulated in the subsequent processing stage. This additional input of sugars and degradation products would represent contamination in the OS stage, as the sugars would remain in the extract during the downstream processing and purification of the lignin [31]. Conversely, if the lignin were not removed by the washing, the extraction yield of lignin in the OS stage could appear higher, as more lignin would be available in that stage to be purified. On the other hand, lignin solubilized in the LHW as small fragments may re-polymerize and become insoluble in the OS stage. Therefore, this washing step should be studied further to define the technical benefit of washing or not washing between the LHW and OS stages, as it becomes a tradeoff between an increased concentration of lignin and of sugars/degradation products, and the quality of the lignin. When comparing the results obtained in this work with previously reported studies, similar sugar concentrations were achieved (12-18 g/L of total hemicellulosic sugars), and degradation product concentrations were of the same order of magnitude (~1 g/L for acetic acid and 0.5-1 g/L for furfural) [6,17,32]. The obtained lignin extraction yields were higher than those reported in the base study for this work (~2 g/L of both AIL and ASL, for a total lignin of ~4 g/L) [16]. A comparison of the extraction concentrations obtained for the LHW stage between this work and [16] can be found in the Supplementary Material S2. As mentioned in the methodology, the targeted severity factor for the pretreatment was 3.77, yet the actual value was higher: the average severity factor of all the extractions was 3.95 ± 0.01.
From this value, 4% corresponded to heating, 91% to the holding time, and 5% to cooling. During the holding time, however, the temperature kept rising, reaching 170 °C by the end of the 90 min holding period, which increased the severity factor from the targeted 3.77 to 3.95. The average temperature profile of the different LHW extractions used to calculate the severity factor is shown in Supplementary Material S3. As mentioned before, Serna-Loaiza et al. (2022) studied LHW through a complete combinatorial study of temperature (160, 180, and 200 °C) and time (30, 60, and 90 min) and determined sugars, degradation products, and lignin at each of these conditions, i.e., at each corresponding severity factor [26]. In the present study, we obtained a severity factor lying between two of the points of the previously cited study and found a higher delignification than expected in the LHW stage. Therefore, we complemented the results reported in that publication with the lignin and hemicellulose hydrolysis yields obtained in our work (Fig. 3).
Sequential Pretreatment Stage: LHW Followed by OS
The goal of our work is the integral valorization of wheat straw, up to the evaluation of the final solid for papermaking. However, because of the conditions used in the LHW stage, we identified that there might be a peak in lignin hydrolysis between severity factors of 3.80-4.00. This information is highly relevant for combinatorial pretreatments such as LHW-OS. Ideally, as much lignin as possible should remain in the solid after the LHW stage to be hydrolyzed in the subsequent OS stage [26]. Therefore, the LHW stage in the mentioned range of severity factors should be addressed in more detail in further research. Regarding the higher-than-expected delignification in the LHW stage, there are two angles from which to analyze these results: on the one hand, the efficiency of component distribution into the dedicated extracts; on the other hand, the overall delignification of the final solid. Regarding distribution efficiency, the severity factor reached in the LHW stage hydrolyzed much more lignin than expected, which in turn decreases the net amount of lignin to be extracted in the OS stage, the stage dedicated to this purpose (Fig. 3: Lignin and hemicellulose hydrolysis at different severity factors. Data points framed in red correspond to this work; the other data points are adapted from Ref. [26].). This is a drawback of the pretreatment conditions, as the lignin extracted in the LHW cannot be readily valorized further and represents a lost component going into the sugar extract. Regarding the overall delignification, this result can still lead to a higher overall delignification of the final solid, even though the lignin is not distributed as desired. The overall result of the combined pretreatment depends on the OS stage; therefore, this is analyzed more thoroughly below.
For the OS stage, as expected, the concentration of sugars is lower than in the LHW stage: monomeric sugars are almost zero, and total sugars (C5 + C6) are 32% of the value reached in LHW. The composition of the total sugars also differed, as in this case the highest share corresponds to C6 sugars (81%). This can be explained because the largest share of hemicellulosic sugars is pentoses, which are primarily hydrolyzed in the LHW stage, leaving the remaining hexose fraction of the hemicellulose to be extracted in the OS stage. When comparing the extract with the ethanol wash, monomeric sugars, degradation products, and ash were between 10-15% of the concentration reached in the extract, while total sugars and lignin reached 27-36% of the extract concentration. For the water wash, all values were below 10% of the extract concentration, except for total sugars (48-55% of the extract). These results further corroborate the importance of the washing steps in terms of final solid purity and valorization of the components that would otherwise remain in the solid. Specifically for lignin, the ethanol wash removed a concentration equivalent to 35% of that in the extract. The lignin in the ethanol wash could be mixed with the extract and further precipitated, increasing the overall valorization yield of the lignin. In the case of the water wash, the total sugars concentration is more than 50% of that of the OS extract and even 17% of that of the LHW extract, with almost no lignin or degradation products. This stream could be mixed with the LHW extract to increase the valorization of hemicellulose as sugars. Finally, in terms of the final solid, these washing steps significantly decrease the components remaining on the solid, which means a solid with less hemicellulose and lignin content.
Comparing the results obtained in this work with previously reported studies, similar sugar and degradation product concentrations were achieved (around 4 g/L of total sugars and low concentrations of degradation products) [28,31,33]. The obtained lignin extraction yield is similar to the values reported in the referenced studies (~7 g/L); however, those studies correspond to standalone OS extractions and not sequential treatments. Compared to the base study for this work, lignin concentrations were lower (~7 g/L compared to 10 g/L of AIL plus ASL) [16]. A comparison of the extraction concentrations obtained in this work and in [16] can be found in the Supplementary Material S2. As analyzed in the previous section, the conditions chosen for the LHW stage hydrolyzed more lignin than expected, so no increased lignin extraction was obtained in the OS stage. However, the overall delignification of the pulp reached in this work (summing the extract and wash concentrations) accounts for 17.8 g/L (corresponding to 89.9% of the lignin being removed). This value is higher than the overall delignification (~15 g/L, 77% of the lignin removed) reported by Serna-Loaiza et al. (2021) [16].
As expected, the LHW stage has a higher sugar concentration (13 g/L of total sugars in the sugar extract compared to 4 g/L in the lignin extract). The main sugar contribution comes from xylose in the sugar extract and from mannose in the lignin extract. Xylan is the major oligomer of wheat straw hemicellulose, as observed in the characterization reported in the methodology, and it is hydrolyzed in the LHW stage, leaving minor hemicellulose fractions to be hydrolyzed in the subsequent OS stage. A complete characterization of each of the quantified sugars is presented in Supplementary Material S4. These results indicate the importance of the technological design of the process for upscaling: in situ pressing, which minimizes the moisture content of the pretreated solid by removing as much extract as possible, and the solvent change for the washing steps are features of high relevance for an overall integral valorization of the feedstock.
Composition of the Solids after the LHW-OS Pretreatment
The next step consisted of calculating the composition of the final solid based on the liquid fractions, thereby identifying the distribution of components along the stages. After the LHW stage, the values obtained for the OS stage (extract, washes, and solid) were scaled proportionally based on the solid leaving the LHW and its moisture content. The densities of the liquid fractions were 1.01, 1.00, 0.90, 0.89, and 0.99 g/mL for the sugar extract, LHW wash, lignin extract, OS ethanol wash, and OS water wash, respectively (Fig. 4). All the information related to the mass balance of each extraction is provided in Supplementary Material S3. During the experimental tests, the collection, pressing, and centrifugation steps implied certain material losses, which accounted for 8%wt in the LHW stage and 5%wt in the OS stage, relative to the initial total loaded mass. In this theoretical mass balance, we assumed that there were no losses and added them to the respective extracts. Furthermore, we assumed that the glucose hydrolyzed in the pretreatments corresponded to glucose from the hemicellulose and not to cellulose degradation. This assumption is supported by previous studies indicating that LHW and OS pretreatments do not hydrolyze the cellulose fraction [24,34]. Figure 5 shows the distribution of cellulose, hemicellulose, lignin, extractives, and ash along the different fractions of the sequential pretreatment. Degradation products were back-calculated to sugars, and the total amount of sugars was converted into the respective oligomers to determine the composition of the solids in the different stages. We assumed that glucan corresponds to cellulose and the other carbohydrates (arabinan, galactan, xylan, and mannan) to hemicellulose. Extractives were distributed between the LHW and OS extracts according to the composition of water/ethanol extractives referred to in the Methodology. Extraction yields of 5.1%, 51.3%, 89.9%, 75.0%, and 100.0% were achieved for cellulose, hemicellulose, lignin, ash, and extractives, respectively. These values were calculated from the mass of each component leaving the solid, i.e., its mass in the feedstock minus the mass remaining in the final solid (LHW-OS solid), relative to the mass of that component in the feedstock. The final solid has a composition of 69.9, 25.9, 3.7, 0.1, and 0%wt of cellulose, hemicellulose, lignin, ash, and extractives, respectively, and represents 48.1% of the initial mass of the feedstock. The LHW extract solubilized and hydrolyzed 40.2% of the initial hemicellulosic sugars and 42.4% of the lignin contained in the feedstock. The OS extract, in turn, contains 30.4% of the lignin from the feedstock. The drawbacks and benefits of the obtained sugar and lignin yields were discussed above.
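To make this bookkeeping explicit, a simplified sketch is given below (Python). The component masses are hypothetical placeholders per 100 g of dry feedstock; only the calculation itself mirrors how the yields above are defined (mass leaving in extracts and washes relative to the mass of that component in the feedstock).

def removal_pct(mass_in_feedstock_g, stream_masses_g):
    """Percentage of a component removed from the solid, summed over all liquid streams."""
    return 100.0 * sum(stream_masses_g) / mass_in_feedstock_g

# Hypothetical lignin masses per 100 g of dry feedstock (placeholders, for illustration only):
lignin_in_feedstock_g = 20.0
lignin_in_streams_g = {"LHW extract": 8.0, "LHW wash": 0.5,
                       "OS extract": 6.0, "OS ethanol wash": 2.5, "OS water wash": 1.0}
print(removal_pct(lignin_in_feedstock_g, lignin_in_streams_g.values()))   # 90.0 % removed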
Another topic of special interest is the distribution of ash along the pretreatment stages. Wheat straw has a higher ash content than other feedstocks used for papermaking (e.g., wood), and specifically a higher content of silicates [35], which can be detrimental to the service life of papermaking machinery [24]. Figure 6 shows the calculated absolute mass of ash for each stage: 61% of the ash contained in the feedstock is extracted in the LHW stage (extract and wash), and an additional 13% in the OS stage (extract and washes). The overall ash removal reached 74%, which further supports the suitability of the obtained pretreated pulp for papermaking.
Papermaking from the Solid after the LHW-OS Pretreatment
After the production of the final pulp and the evaluation of the respective streams, the following step consisted of evaluating the obtained pulp for papermaking, with varying proportions of pretreated wheat straw pulp (Pret-WSP) and untreated wheat straw pulp (WSP) in combination with milled birch kraft pulp (BKP). BKP is typically used for papermaking, and WSP was used as a control to show the influence of the pretreatment. The first indicator measured for the pulps was the beating degree (BD), which was 70, 11, and 35 °SR for WSP, Pret-WSP, and BKP, respectively. The next step consisted of evaluating the different paper formulations and the properties of each paper. Figure 7 shows the obtained paper sheets, and Table 1 the respective characterization. The formulations with Pret-WSP show a more marked change in color than the WSP formulations, and in both cases the paper acquires a more granular appearance as the share of WSP or Pret-WSP increases. Strength properties decreased with increasing content of Pret-WSP or WSP. Air permeance indicates the porosity of the produced sheet; this parameter increased by 29, 122, and 333% with 10, 20, and 30% inclusion of Pret-WSP, compared to BKP. Comparing the values obtained for WSP and Pret-WSP, we observed that the pretreatment improves the pulp quality, reaching lower air permeance and thickness values. Among the strength properties, bursting and tensile strength play a key role, indicating the resistance of the produced paper sheets to stress. The other measured strength properties (surface strength, tearing index, and strain at break) showed the same trend. The bursting index indicates the pressure the paper can tolerate before rupture, and the tensile index represents the tensile force required to rupture a paper strip. Figure 8 shows the (a) bursting index and (b) tensile index for the different BKP/WSP and BKP/Pret-WSP formulations. The bursting index decreased by 30, 47, and 58% with 10, 20, and 30% inclusion of Pret-WSP, compared to BKP. A similar decrease was observed for the tensile index (17, 38, and 50% for 10, 20, and 30% inclusion of Pret-WSP). Additionally, the values obtained for WSP compared to Pret-WSP show that the pretreatment improves the quality of the pulp, increasing both the bursting and tensile strength.
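The percentage changes quoted above are relative to the 100% BKP reference sheet; a minimal sketch of that comparison is given below (Python). The bursting-index values are hypothetical placeholders chosen only to reproduce the reported relative decreases, not the values of Table 1.

def relative_change_pct(value, reference):
    """Signed change of a sheet property relative to the 100% BKP reference, in %."""
    return 100.0 * (value - reference) / reference

# Hypothetical bursting-index values (kPa*m2/g), for illustration only:
reference_bkp = 5.0
blends = {"10% Pret-WSP": 3.5, "20% Pret-WSP": 2.65, "30% Pret-WSP": 2.1}
for name, value in blends.items():
    print(name, round(relative_change_pct(value, reference_bkp), 1), "%")   # -30, -47, -58 %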
Considering the overall performance across these sheet properties, we observed that, although the properties deteriorate with increasing Pret-WSP content, the values obtained for the 10% Pret-WSP formulation are still comparable to those of BKP. After evaluating the different pulp formulations, it is evident that the sequential pretreatment in its current state does not by itself provide a pulp directly usable for papermaking. Based on this, a detailed analysis should be carried out to identify the most suitable pulp type to be replaced. Different pulp grades are used for paper production; among these, mechanical pulp contains components other than cellulose, and an LHW-OS pulp could be used to replace this type of lignin-rich pulp. Wheat straw pulping for paper and packaging applications has been studied previously [24,36-39]; however, the technological approach typically relies on other technologies (soda, ammonia, kraft pulping, and bleaching, among others) generally used in the papermaking industry. Hence, the results are not directly comparable, especially considering the influence of the chosen pretreatment strategy on the quality of the final pulp and the usability of the other streams. With the delignification and hemicellulose solubilization reached in the final pulp, further pulping is still necessary to obtain a pulp directly usable for papermaking. However, this implies additional processing stages that should be developed and optimized, as well as the evaluation of other variables such as the initial particle size of the raw material. The focus should therefore not be on producing a pulp directly usable for papermaking but on pulp replacement, which, as observed, showed suitable physical and strength properties up to a 10% Pret-WSP/90% BKP formulation. The production of chemical wood pulp in 2019 was approximately 149 and 27 million tons in the world and the European Union (EU-28), respectively [40]; even a minor replacement at this scale could have a significant impact. This replacement must be further studied from a technical, economic, logistic, and environmental perspective.
Conclusion
This work tested the sequential LHW-OS pretreatment of wheat straw to valorize hemicellulose, lignin, and the final pulp. Even though wheat straw pulping through conventional processes (e.g., Kraft, soda, and bleaching) has been studied previously, the results are not directly comparable, especially considering that the chosen pretreatment influences the quality of the final pulp and the usability of the other streams. We evaluated both the obtained liquid fractions resulting from the LHW and OS, respectively, and the use of the final pulp for papermaking, covering additional steps in developing a biorefinery from wheat straw. Particular focus was given to the evaluation of the final pulp for papermaking. An extraction yield of 5.1%, 51.3%, and 89.9% was achieved for cellulose, hemicellulose, and lignin, respectively, corresponding to a sugar extract with ~ 13 g/L of hemicellulosic sugars, a lignin extract with ~ 7 g/L of lignin and a pulp with 67%wt cellulose (compared to 35%wt in the feedstock). The papermaking evaluation indicated that a formulation with 10% pretreated pulp and 90% pure cellulose kraft pulp has sufficient strength and physical properties.
Multiple areas of improvement were identified, which open further questions based on the results obtained in this work. First, the configuration of the washing steps could bring technological improvements to the process. After the LHW stage, washing or not washing implies a tradeoff between an increased concentration of lignin and of sugars/degradation products in the subsequent OS stage; hence, the technical benefits and drawbacks should be analyzed. For the OS stage, the washing step with ethanol represents an opportunity to increase the overall lignin solubilization yield of the process, as this step mainly solubilized lignin that could be mixed with the OS extract. In addition, the water wash removed sugars that remained in the solid, rendering a cleaner final pulp. Another topic of interest is the set of conditions for the LHW stage, specifically the amount of lignin hydrolyzed at different severity factors; with this knowledge, the sequential pretreatment could be tuned to obtain a higher lignin concentration in the OS extract. Finally, the possibility of replacing pulp in the pulp and paper industry requires an evaluation of its technical, economic, logistic, and environmental performance.
Nutrient Addition Prompts Rapid Destabilization of Organic Matter in an Arctic Tundra Ecosystem
Nutrient availability in the arctic is expected to increase in the next century due to accelerated decomposition associated with warming and, to a lesser extent, increased nitrogen deposition. To explore how changes in nutrient availability affect ecosystem carbon (C) cycling, we used radiocarbon to quantify changes in belowground C dynamics associated with long-term fertilization of graminoid-dominated tussock tundra at Toolik Lake, Alaska. Since 1981, yearly fertilization with nitrogen (N) and phosphorus (P) has resulted in a shift to shrub-dominated vegetation. These combined changes have altered the quantity and quality of litter inputs, the vertical distribution and dynamics of fine roots, and the decomposition rate of soil organic C. The loss of C from the deep organic and mineral soil has more than offset the C accumulation in the litter and upper organic soil horizons. In the litter and upper organic horizons, radiocarbon measurements show that increased inputs resulted in overall C accumulation, despite being offset by increased decomposition in some soil pools. To reconcile radiocarbon observations in the deeper organic and mineral soil layers, where most of the ecosystem C loss occurred, both a decrease in input of new root material and a dramatic increase of decomposition rates in centuries-old soil C pools were required. Therefore, with future increases in nutrient availability, we may expect substantial losses of C which took centuries to accumulate.
INTRODUCTION
Arctic tundra soils hold at least 5-6% of the world's soil carbon, although recent estimates suggest that this amount is at least six times higher (IPCC 2001; Horwath 2007). In Europe, these ecosystems are subject to anthropogenic N deposition, and although the rates are generally low (0.1 g m⁻² y⁻¹), a few places receive up to 1 g m⁻² y⁻¹ (Woodin 1997). Additionally, arctic tundra soils are warming rapidly (Overpeck and others 1997; ACIA 2004). As this warming continues, it is expected to affect soil C storage both directly, through temperature responses in microbial decomposition, and indirectly, through feedbacks associated with nutrient availability, as well as changing surface energy balance and plant species composition (Chapin and others 1995; Hobbie and Chapin 1998; Dormann and Woodin 2002; Weintraub and Schimel 2005; Van Wijk and others 2004). Many of these feedbacks are positive, such as the observed decreases in albedo (Chapin and others 2005) and increases in snow depth during the winter months (Sturm and others 2001, 2005), which lead to further increases in air and/or soil temperatures. Another important set of feedbacks is related to the faster decomposition of organic matter in warmer soils leading to increased nutrient availability (of about 10 g N m⁻² y⁻¹), higher productivity, and changes in plant community composition (Chapin and others 1995; Mack and others 2004). Nutrient addition generally has a positive effect on primary production in vascular plants in the arctic, although which species benefits most depends on the type of system (Van Wijk and others 2004; Dormann and Woodin 2002; Hobbie and others 2005; Weintraub and Schimel 2003). In acidic tussock tundra, nutrient addition often results in a shift in plant species composition from graminoid to shrub species (Chapin and others 1995; Chapin and Shaver 1996; McKane and others 1997; Shaver and others 2001). Shrub species produce lower quality litter and wood than graminoid species, perhaps resulting in slower decomposition (Hobbie 1996) and thus a negative feedback. However, shrub soils have been found to have higher rates of mineralization than their chemistry would predict (Weintraub and Schimel 2003).
All of these feedbacks together can influence ecosystem C storage, although they may do so in opposing ways. The balance of increased aboveground production and changes in decomposition will determine the overall effect of increased temperature and nutrient contents on soil C storage. In a study of moist acidic tundra designed to isolate nutrient availability effects, Mack and others (2004) found that nutrient addition, at levels comparable to expected increases in nutrient availability with warming, prompted C losses in the lower organic and mineral layers that far surpassed the increases in productivity and C accumulation in standing biomass, litter, and the soil surface (0-5 cm) organic layers. This increase in decomposition would be associated with even greater rates of N mineralization (approximately 137 g N m⁻² if N mineralized is proportional to the C:N ratio).
We measured radiocarbon contents of archived samples from the Mack and others (2004) study to ascertain whether decomposition dynamics were altered by nutrient addition in ways not discernible from change in C inventory alone. Radiocarbon allows us to determine whether the observed changes in C stocks reflect altered inputs, decomposition rates, or a combination of both. It is especially important to determine how vulnerable large stores of old C stored deep in northern soils could be to increased decomposition under altered nutrient and temperature conditions.
METHODS
The experimental site, part of the Toolik Lake Arctic Tundra Long Term Ecological Research site in Alaska, is located in the northern foothills of the Brooks Range (68°38′N, 149°38′W; elevation 760 m). The experiment consists of four replicate 5 × 20 m blocks. The fertilized plots have received 10 g N m⁻² y⁻¹ as NH₄NO₃ and 5 g P m⁻² y⁻¹ as P₂O₅ since 1981. Moist acidic tussock tundra dominated by the sedge Eriophorum vaginatum was originally present in all plots and continues in the control plots. However, the nutrient addition plots have become dominated by the deciduous shrub Betula nana, which was originally present as a smaller proportion of the plant community (Shaver and others 2001). Roots and soil from control and experimental plots were collected in July 2000. Roots and litter were collected within each plot from five 20 × 20 cm quadrats by cutting down to the mineral soil with a knife. The quadrats were taken 1 m from the edge of the plot and were randomly arrayed along the 20 m length of the plot. Live roots were removed and separated by hand and sorted into coarse (>2 mm) and fine (<2 mm) size fractions. Roots and soil from the mineral horizon were collected from the surface of the mineral soil to the permafrost (~5 cm) using a 2.5 cm diameter corer. A 5 × 5 cm monolith was collected from the edge of the hole for the organic soil analysis. The total organic soil was separated by depth into litter (dead recognizable plant material), 0-5 cm organic (O1), and greater than 5 cm organic (O2) layers. Samples were dried at 65 °C, ground, and stored. Further details are reported in Mack and others (2004).
Samples were combusted and the evolved CO₂ was cryogenically purified and converted to graphite using sealed zinc tube reduction, then analyzed for radiocarbon content at the W.M. Keck Carbon Cycle Accelerator Mass Spectrometer facility at UC-Irvine (Southon and others 2004; Xu and others 2007). Radiocarbon data are reported as Δ¹⁴C, the deviation in parts per thousand (per mil, ‰) of the ¹⁴C/¹²C ratio from that of a standard of fixed isotopic composition (0.95 times the ¹⁴C/¹²C of the oxalic acid I standard, decay-corrected to 1950). As reported, Δ¹⁴C values are corrected for mass-dependent isotope fractionation using the measured ¹³C/¹²C ratio and normalizing to a δ¹³C value of −25‰ (Stuiver and Polach 1977). Negative Δ¹⁴C values represent C old enough for radioactive decay to have occurred. Due to atomic weapons testing, the Δ¹⁴C signature of atmospheric CO₂, and hence of fresh vegetation inputs to litter and soils, reached a high in 1963 near 900‰ in the northern hemisphere and has been declining since. Between 1981 and 2000, these values declined from 257 to 90‰ at a rate of approximately 6-10‰ y⁻¹ (Levin and Kromer 2004). Consequently, differences in Δ¹⁴C within ecosystem C pools reflect differences in the timing of when C was fixed, as well as differences in decomposition time. The accuracy of radiocarbon analyses is ±0.3% (or 3‰). The turnover time of soil organic matter (SOM) was determined from a model that tracks C additions to and losses from organic and mineral soil horizons, and best reproduces the observed C inventory and Δ¹⁴C content of SOM in control and fertilized soils in 2000 (Gaudinski and others 2000; Figure 1). For the control site, we assumed steady-state conditions over the past 19 years (that is, no net gain or loss of carbon in each horizon), and represented organic matter as either a single homogeneous pool (upper organic layer) or, where required to match observations, multiple organic matter pools with different characteristic turnover times (other layers). Inputs for the steady-state model (control) were calculated as the C in each SOM pool divided by its turnover time.
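A minimal, single-pool sketch of this bookkeeping is given below (Python). The annual atmospheric Δ¹⁴C curve, the initialisation, and the function name are our own simplifications; the study's actual model uses several pools per horizon and the full atmospheric record.

LAMBDA = 1.0 / 8267.0  # radiocarbon decay constant, yr^-1

def run_pool(turnover_yr, atm_d14c, input_lag_yr=0):
    """Step a steady-state pool through an annual atmospheric Delta14C record (permil)."""
    k = 1.0 / turnover_yr                    # fraction of the pool decomposed (and replaced) each year
    years = sorted(atm_d14c)
    pool = atm_d14c[years[0]]                # crude initialisation at the first year's atmospheric value
    for year in years[1:]:
        lagged = max(years[0], year - input_lag_yr)
        inputs = atm_d14c[lagged]            # inputs carry the signature of older, living plant tissue
        pool += k * (inputs - pool)          # steady state: inputs exactly replace decomposed C
        pool = (pool + 1000.0) * (1.0 - LAMBDA) - 1000.0   # radioactive decay (negligible per year)
    return pool

# Hypothetical, smoothly declining post-bomb atmospheric curve (permil), for illustration only:
atm = {year: 280.0 - 9.5 * (year - 1980) for year in range(1980, 2001)}
print(round(run_pool(turnover_yr=45.0, atm_d14c=atm, input_lag_yr=3), 1))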
Organic matter in fertilized plots was assumed to have the same inputs, turnover times, and pool structure as the steady-state model up to and including 1981, the initial year of simulation (Figure 1). To allow for changes in litter quality associated with vegetation change in fertilized plots, inputs in subsequent years were added to a separate "New" pool, except in the litter, where inputs into the graminoid pool (Pool 1) continued at a rate determined from measured aboveground net primary production (NPP). The post-fertilization increase in inputs was calculated as the overall observed increase in NPP (Mack and others 2004) divided by the length of the experiment. For tracking this new C, we assumed a 5-year transition period during which inputs into the original pools dropped to zero and inputs into the new pool increased to their constant final value.
Radiocarbon values in SOM reflect both the time spent in living plant tissues and the residence time of dead plant material in soils. Thus, failure to account for plant residence times can result in overestimation of decomposition rates from Δ¹⁴C data (Perruchoud and others 1999). For model inputs into the litter layer, time spent in living plant tissues was set to 3 years for the graminoid-dominated system and 7 years for the shrub-dominated system [estimates derived from Shaver and Chapin (1991)], with the increase in plant tissue lifetimes occurring gradually following fertilization at a rate of 1 year y⁻¹ over 4 years. In all other soil horizons, the Δ¹⁴C value of plant litter inputs was derived from the amount and age of root C at the time of input, as determined by the mean age of live roots in that layer (derived from the Δ¹⁴C of live roots; Gaudinski and others 2001) and the amount of root production in that layer, assuming production was proportional to the root biomass in that layer (Nadelhoffer and others 2002). Although the root ingrowth cores used to produce these estimates have a number of biases (Vogt and Persson 1991; Fahey and Hughes 1994; Majdi and others 2005), this method of calculating root production produces deep-horizon values more similar to estimates from minirhizotrons used in a nearby nutrient addition study (Sullivan and others 2007). Additional carbon inputs were required to satisfy the mass balance requirements of the model, as estimated root inputs were insufficient to support observed C inventories given the turnover times necessary to explain observed Δ¹⁴C values. In these cases, we assumed the process involved downward transport of soil (SOM) or dissolved (DOM) organic matter, with the amount of material assumed to equal the difference between root inputs and the total C input required to support observed C stocks for the control plots at steady state. We assumed the time lag associated with these additional inputs was the same as for root inputs, because the assumed lag had little effect on the results. (In the model, inputs into the graminoid pool (Pool 1) continue after fertilization because there continues to be some graminoid production, but, to match observations, new shrub/moss litter had to go into a separate pool from old shrub/moss litter; see Figure 1.) We ran the model iteratively for the upper organic, lower organic, and mineral horizons using the range of possible values for the turnover time and pool size of Pools 1 and 2 and the "New" pool (which accumulated during the experiment). We report the range of values that best predict the observed C and Δ¹⁴C values for each soil layer. In the litter horizon, we used the 1982 ratio of graminoid to shrub NPP to determine the pool sizes and allowed only solutions with input values within 25% of those measured. We report the range of values that match the observed C inventory in 1981 and 2000 and the Δ¹⁴C content in 2000 within 1% for all control layers, 2% for the fertilized litter, and 6% for all other layers. We include only the steady-state values that allowed us to match observations in both control and fertilized cases. The effects of fertilization on root radiocarbon were analyzed by horizon using one-way ANOVAs. Soil carbon changes were analyzed using a two-way ANOVA, with depth and treatment as the independent variables. Statistical analysis was performed using SPSS statistical software.
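The iterative fitting described above can be pictured as a grid search that keeps every parameter combination whose predicted C stock and Δ¹⁴C fall inside the acceptance windows. The sketch below (Python) illustrates only that search logic; the forward model passed in is a deliberately trivial stand-in, and the observation values, parameter ranges, and tolerances are placeholders loosely based on the deep organic horizon.

import itertools

def grid_search(observed, forward_model, taus1, taus2, fracs1,
                tol_c=0.01, tol_d14c=0.06):
    """Keep every (tau1, tau2, frac1) whose predicted C stock and Delta14C match the observations.

    observed: (C inventory, Delta14C) for one horizon
    forward_model: callable (tau1, tau2, frac1) -> (predicted C inventory, predicted Delta14C)
    tol_c, tol_d14c: relative acceptance windows (cf. the 1-6 % criteria described above)
    """
    obs_c, obs_d14c = observed
    accepted = []
    for tau1, tau2, frac1 in itertools.product(taus1, taus2, fracs1):
        pred_c, pred_d14c = forward_model(tau1, tau2, frac1)
        if (abs(pred_c - obs_c) <= tol_c * abs(obs_c)
                and abs(pred_d14c - obs_d14c) <= tol_d14c * abs(obs_d14c)):
            accepted.append((tau1, tau2, frac1))
    return accepted

# Deliberately trivial stand-in model: two pools fed by a fixed 20 g C m-2 y-1 input split by
# frac1, with Delta14C taken as a crude linear function of the mean turnover time.
toy = lambda t1, t2, f1: (20.0 * (f1 * t1 + (1 - f1) * t2),
                          90.0 - 0.9 * (f1 * t1 + (1 - f1) * t2))
print(grid_search((3000.0, -45.0), toy,
                  taus1=range(25, 80, 5), taus2=range(100, 901, 100),
                  fracs1=[0.5, 0.6, 0.7, 0.8, 0.9]))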
RESULTS
Roots: Root biomass distribution and radiocarbon signatures changed in response to nutrient additions (Figure 2A). Mack and others (2004) found that depth-integrated root biomass did not change, but a large portion of root biomass shifted from the lower horizons to the upper horizons in response to nutrient addition. In the litter layer, the ¹⁴C of live roots, which we infer reflects root age (Gaudinski and others 2001), was consistent with ages that increased from 1 year in the control plots to 7 years in the fertilized plots (P = 0.10, F1,4 = 4.40). No significant change in ¹⁴C-derived root age was observed in the surface organic horizon (13 years in the controls and 11 in the fertilized plots). In contrast, the ¹⁴C-derived root age in the deeper organic layer decreased from 15 (control) to 8 (fertilized) years (P = 0.04, F1,4 = 8.81). Root ages in the mineral soil were unchanged (5 years).
Soil: Changes in SOM stocks in fertilized plots greatly exceeded those for roots (Mack and others 2004). Both C content and ¹⁴C decreased with depth in soils (Figure 2B). We found higher (more positive) ¹⁴C in the litter and the surface organic horizon in response to fertilization, whereas in the deeper organic and mineral horizons ¹⁴C levels were lower in the fertilized plots (Figure 2B). There was a significant relationship between ¹⁴C and treatment when changes in ¹⁴C with depth were accounted for (depth P < 0.001, F3,24 = 65.92; treatment P = 0.960, F1,24 = 0.00; treatment × depth P = 0.036, F3,24 = 3.33; two-way ANOVA).
Explaining the high ¹⁴C values observed in the upper layers of the treatment plots requires both increased stores of recent organic matter and decreased decomposition of pre-treatment shrub/moss organic matter (Table 1). The litter layer originally had a turnover time of about 4 years for graminoid litter and 6 years for shrub/moss litter. Following N addition, turnover times of pre-treatment shrub/moss litter increased to 20-30 years, whereas the turnover times for graminoid material remained similar to control values. Post-treatment shrub inputs had slightly faster turnover times (3-4 years) than the controls (6 years) (Table 1). Due to increased input rates, there was a build-up of recent organic matter, despite the similar or even faster turnover times for most of the pools (Figure 3). Additionally, to match the measurements, it was necessary for post-fertilization shrub inputs to have different turnover times than pre-treatment shrub inputs (that is, to be modeled as a separate pool).
The surface organic horizon initially had a turnover time of 45 years, but after fertilization, the turnover times of the original material had to drop to 29-32 years to reproduce both the observed C and ¹⁴C values (Table 1, Figure 4). To explain the measurements, nearly all of the post-fertilization inputs had to be retained in the model; that is, most of what accumulated between 1981 and 2000 remained relatively undecomposed (Table 1, Figure 3).
In the deeper organic horizon (>5 cm), a two-pool model was necessary to reproduce the observations. Solutions were found when the proportion of C in the younger pool (Pool 1) ranged from 0.5 to 0.9, whereas in the mineral horizon solutions were found when the Pool 1 C proportional abundance ranged from 0.5 to 0.7. The remainder of the C is assumed to be in Pool 2 prior to treatment. In the deeper organic soil, the average turnover time was approximately 90 years, which was divided into a faster pool (Pool 1) with a turnover time of 25-75 years and a slower pool (Pool 2) with a turnover time of 100-900 years. After fertilization, Pool 2 remained unchanged, whereas the turnover time of Pool 1 dropped from 25-75 to 5-30 years, thus decreasing dramatically in size since 1981 (Table 1, Figure 3). Like the upper organic horizon, most of the newly added C was retained (Table 1).
Similarly, the mineral horizon had a mean turnover time of 1,200-1,300 years. To match post-treatment observations, it was divided into two pools with turnover times of 150-600 years (Pool 1) and 2,000-4,000 years (Pool 2). Again, reproducing the observed C losses and ¹⁴C values in the fertilized plots required dramatic decreases in turnover times, to 10-20 years (Pool 1), whereas turnover times for Pool 2 remained long (100+ years). As in the layer above it, most of the newly added C was retained (Table 1). In both layers, it was the pool with the faster turnover (decades-centuries vs. centuries-millennia) that experienced the dramatic C losses (Figure 3).
Given the reduction in soil C inventory, SOM influxes to the lower layers following fertilization were reduced as much as possible, but a rapid acceleration of decomposition in Pool 1 was still necessary to explain the observations. In this model, SOM inputs are indistinguishable from root inputs and use the same time lag. We did not know the exact rate of root inputs; thus, some of the assumed SOM inputs could reflect underestimated root production. Given the amount of C in these layers relative to inputs, the assumed time lag was inconsequential and, when changed, did not alter the overall findings.
DISCUSSION
C inventory and ¹⁴C content measurements integrate multiple processes. For example, the decline in C storage and ¹⁴C content in the deeper soil may result from a combination of: (1) a decline in root litter inputs into these layers, (2) changing root turnover times, and (3) the loss of the more labile C, leaving behind the older, recalcitrant C. Therefore, we used our radiocarbon measurements and model results, along with previously published data from this experiment, to distinguish between changes in inputs and those due to altered decomposition of the pre-1981 material. (Table 1 note: Pool 1 represents the entire pre-treatment C pool or, when two pools are present, the faster cycling C pool, whereas Pool 2 represents the slower cycling C pool. The control is assumed to be in steady state (Pools 1 and 2 do not change inventory with time). In the control scenario, Pool 1 contains 50-70% of the total C in the mineral horizon and 50-90% in the >5 cm organic horizon; Pool 2 contains the remaining C. The New pool represents post-1981 C inputs in the fertilized treatment. Numbers greater than 20 y in the New pool do not reflect actual turnover times but rather indicate that C is being retained. The amounts of C in these pools are shown in Figure 3.)
Our analysis was constrained by the fact that C accumulated in the litter and surface organic horizons since fertilizer treatment began in 1981 will have ¹⁴C contents reflecting both the decomposability of the added litter and changes in plant biomass lifetimes. For example, fast turnover (<10 years for combined plant + decomposition times) means that the most recently added material forms the bulk of the C pool, because additions in the first decade following 1981 have largely decomposed. The accumulated SOM pool would have a ¹⁴C value close to average atmospheric values over the last decade (1990-2000), between 90 and 110‰. In contrast, accumulation of all litter added since 1981, without any decomposition, would result in a SOM radiocarbon value close to the two-decade atmospheric mean of approximately 140‰. To produce the same carbon inventory in 2000 would require substantially higher C inputs in the fast-turnover case, compared to the slow-decomposition case. Our radiocarbon measurements incorporate both the changes in the SOM present prior to the fertilizer treatments and the accumulation of post-fertilization plant inputs to the SOM pool (tracked as the New pool in our model). Changes in the decomposition rate of SOM from either of these pools will affect the overall C and ¹⁴C inventories.
Upper soils: The C storage and ¹⁴C values in both the litter and surface organic soil increased with fertilization. Other studies have shown that nutrient additions in acidic tussock tundra caused increased litterfall (Mack and others 2004) and a species shift (Chapin and others 1995; Chapin and Shaver 1996; McKane and others 1997; Shaver and others 2001) associated with decreased litter quality (Hobbie 1996), both of which lead to C accumulation in the litter layers. Shifting patterns of belowground allocation, including increased root biomass in the upper layers of fertilized plots and decreased biomass in the lower layers, have also been documented (Mack and others 2004). Changes in root biomass and depth distribution likely reflect the change in plant species composition, whereby shrubs have different belowground C allocation strategies than their graminoid predecessors (Jackson and others 2000). The increase in root inventory in the surface layer was coupled with a possible increase in the radiocarbon values of live root C (Table 1, Figure 2A), suggesting an overall increase in root longevity and a longer time lag for inputs from woody/shrub root litter sources to SOM. In consequence, overall root inputs to this horizon would be lower following fertilization, despite the increase in biomass. It is unclear what the actual inputs would be in any horizon, because radiocarbon values reflect the mean age of the root C at a given time point. From minirhizotron studies, we know that some roots live several years, whereas others live days and are not captured in a biomass harvest, thus skewing the data toward lower production estimates and higher turnover times (Tierney and Fahey 2002; Majdi and others 2005). However, root ingrowth cores also miss any production or mortality that occurs between sampling dates (Nadelhoffer and others 2002; Majdi and others 2005). Using the radiocarbon estimates, and assuming that productivity is equivalent to biomass × turnover time⁻¹, litter layer root production in the fertilized plots was only 36% of that in the controls. Like the litter layer, root biomass in the upper organic soil increased, but little change was seen in the turnover times, suggesting an increase in root C inputs proportional to the change in biomass (Table 1, Figure 2A).
The SOM radiocarbon signatures were higher in the fertilized litter and upper organic horizons (Figure 2B), reflecting a combination of increased time spent in living plant tissues, accumulation of C since 1981, and changes in decomposition rates of the SOM present before treatment began (Table 1, Figure 4). A decrease in decomposition of pre-treatment shrub/moss litter was necessary to explain the changes observed in the litter, whereas decomposition rates of newer, more labile material either remained the same or increased. A decline in old shrub litter decomposition is consistent with frequent observations that high-quality (low-lignin) materials are destabilized by N additions, whereas lignin-rich litter is stabilized (Berg 1986; Fog 1988; Berg and Matzner 1997; Carreiro and others 2000; Sinsabaugh and others 2002; Knorr and others 2005). Berg and Matzner (1997) suggest that the stage of decomposition is critical in determining the overall effects of N addition. They found that N enhanced decomposition in early stages, where it is dominated by cellulose and solubles, whereas in later stages, where it is dominated by more recalcitrant compounds, N hindered decomposition (Berg and Matzner 1997). Direct observations of lignin-degrading and cellulose-degrading enzymes also support this claim (Carreiro and others 2000; Sinsabaugh and others 2002; Frey and others 2004). If microbial activity is experiencing N limitation, N addition would clearly result in enhanced decomposition of easily decomposable C. There are at least two possible reasons why N may lead to increased storage of recalcitrant compounds: (1) white rot fungi have the ability to down-regulate the production of lignolytic enzymes in the presence of N, and (2) N may react with lignin and aromatic compounds, forming more recalcitrant compounds (Berg and Matzner 1997; Nommik and Vantras 1982). In tundra soils, where decomposition is limited by both nutrient availability and temperature, it appears that deeper soil horizons contain substantial amounts of accessible carbon, despite their age. However, moss decomposition tends to proceed more slowly than lignin content alone would indicate (Hobbie 1996) and may not be affected by nutrient additions in the same way. Moss production virtually ceased following fertilization, but given moss decay rates, some pre-treatment material could have remained in the litter layer, thus explaining the long turnover times of litter in Pool 2 following treatment, as well as why it was necessary to treat post-fertilization shrub inputs as a separate pool. The decrease in turnover times of the post-treatment shrub litter pool compared to the control shrub/moss pool could be due to the loss of slow-decaying mosses following N + P addition or to an increase in decomposition, either due to fertilization or to the loss of mosses, which can reduce overall decomposition rates due to the production of tannins (Painter 1991).
To match the measured ¹⁴C values in the upper organic horizon following treatment, it was necessary to accelerate decomposition of the pre-treatment material (Table 1, Figure 4). Therefore, the increased C inventory in the upper organic layer occurred despite increased decomposition of pre-treatment litter and indicates that inputs into this layer were higher than suggested by the change in C inventory alone. Turnover times of the New pool were very long, which implies that nearly all new C inputs were stored, and also suggests that inputs into this layer following treatment were likely higher than we assumed here. Nonetheless, even when we increased new inputs, it was still necessary to accelerate decomposition in the pre-treatment material to explain the observations.
Deep soils: C stocks as well as ¹⁴C values in the lower organic and mineral horizons declined substantially in response to N + P addition. Declines in root inventory in the deep soils were accompanied by either a decrease in apparent root lifetime or no change (Table 1, Figure 2A). In nutrient addition plots, fewer roots with shorter lifetimes were found in the deep organic horizon. The combination of a smaller pool with faster turnover suggests that total C allocation to this horizon could remain unchanged. In the fertilized plot mineral horizons, fewer roots with no change in lifetimes indicate that C allocation to this layer has decreased. Because root production was low in relation to the SOM storage in these horizons, any changes in C inventory resulting from shifts in root allocation were relatively small. As a result, we were unable to match measured and modeled C and ¹⁴C values without assuming additional inputs. SOM/DOC input rates tended to be similar to root input rates in most horizons. This suggests that either SOM/DOC transport is an important C transfer pathway in these soils or that root inputs are decoupled from root stocks and are thus greater than calculated here.
Radiocarbon signatures in SOM from the lower soil horizons decreased with fertilization (Figure 2B), which could reflect a combination of the loss of high-¹⁴C root inputs, altered root turnover times, and the loss of C through changes in SOM decomposition rates. As roots, like leaves, are comprised of recently fixed C, a reduction in root inputs into the deeper soil horizons would result in less C with the ¹⁴C signature of the recent atmosphere entering the deep SOM pools. However, root production supplies only 34 g C m⁻² y⁻¹ to the whole profile (Nadelhoffer and others 2002). Only 74 g C m⁻² of root biomass were lost in response to fertilization in the deep organic soil and 20 g C m⁻² in the mineral soil, as compared to total SOM losses of 1,169 and 2,046 g C m⁻² from these layers, respectively. Although the changes in root production, distribution, and turnover must have contributed to the changes in ¹⁴C of the soil organic matter, the direct effect of decreased root inputs alone is insufficient to explain the loss of SOM or the change in ¹⁴C signature in the deeper soil horizons. To explain the amount of C loss and the changes in ¹⁴C signatures, it was necessary to dramatically accelerate decomposition, in spite of using the lowest allowable input rates in the calculations.
Modeling C and ¹⁴C requires some acceleration of decomposition rates of pre-treatment organic matter in response to fertilization in all horizons, even ones in which C inventory increased. What caused increased decomposition in treatment plots, and might the same results be expected in other ecosystems? Nitrogen concentration directly influences decomposition when there is enough labile C to support microbial demands (Haynes 1986). As microbial activity in this system is normally considered to be N limited (Hobbie and others 2002) and labile C compounds are abundant due to conditions unfavorable to decomposition (Weintraub and Schimel 2003), increased N abundance can accelerate decomposition, especially of plant residues that have not yet lost their cellulose or been humified (Haynes 1986).
The dramatic acceleration of decomposition rates in deep soil OM is interesting because one would expect decomposition there to be more limited by low temperatures and high moisture levels than in the layers above it, and, therefore, nutrient limitation would play a lesser role. Why should the nutrient additions affect the lower layers so much more than the upper layers? Laboratory incubations of grassland soils have found that deep soils were more responsive to N and P addition than surface soils; however, they are also more sensitive to temperature (Fierer and others 2003). Therefore, the C loss may be a result of nutrient limitation or of a change in the soil environment. Winter warming of soils, particularly deep soils, beneath the shrubs has been observed, and increased CO₂ efflux during the winter is possible (Sturm and others 2001, 2005). However, winter CO₂ efflux rates in shrub tundra range from 20-50 mg/m²/d (Sturm and others 2005), and with an average winter length of 235 days (NOAA/NCDC), soil respiration can only account for 90-225 g C m⁻² of loss over the course of the experiment. Assuming graminoid tundra respires an equal or smaller amount, even a doubling of the maximum rate (which we might expect with 5-10 °C warming) is insufficient to account for the observed C loss rates in fertilized plots. Therefore, although temperature may play a small role, the primary reason for enhanced decomposition rates is probably the alleviation of N limitation on microbial activity.
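The back-of-the-envelope bound quoted above can be reproduced directly (Python; the flux range, winter length, and 19-year duration are the values given in the text, and the rounding is ours).

# Upper bound on cumulative winter soil C loss from the efflux range quoted above.
winter_days_per_year = 235
experiment_years = 19                       # 1981-2000 fertilization period
for flux_mg_c_m2_d in (20, 50):             # winter efflux range reported for shrub tundra
    total_g_c_m2 = flux_mg_c_m2_d * winter_days_per_year * experiment_years / 1000.0
    print(flux_mg_c_m2_d, "mg m-2 d-1 ->", round(total_g_c_m2), "g C m-2 over the experiment")
# Even doubling the upper rate (~447 g C m-2) falls far short of the observed deep-soil
# losses of roughly 1,169 + 2,046 g C m-2.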
CONCLUSIONS
Increased nutrient availability accelerated decomposition of labile pre-treatment organic matter. Adding ¹⁴C measurement to changes in C stocks following fertilization allowed us to quantify changes in decomposition that were not observed from C inventory alone. The majority of C lost was not the youngest, most labile C, nor was it the oldest, most stabilized C; instead, losses were dominated by the C in deep layers with an average age of approximately 300 years. Although not the most recalcitrant C in the soil, it has accumulated over several centuries and is being lost at greatly accelerated rates following fertilization, which could also prove to be a positive feedback to accelerated C loss under a scenario of warming and increased soil nutrient turnover. As 90% of the vast amount of C stored in arctic regions is in the soil (McKane and others 1997) and these regions are experiencing significantly increased temperatures (Serreze and others 2000), N turnover will increase, leading to additional losses of centuries-old soil C above those due to warming alone. These losses will offset, and perhaps exceed, expected increases in NPP.
Figure 1. Model design. Pool 1 represents the entire pre-treatment C pool in the 0-5 cm organic horizon and the faster cycling C pool in the other layers. Pool 2 represents the slower cycling C pool. The New Pool contains C inputs following N + P addition. The numbers above represent NPP and SOM/DOC inputs in g C m⁻² y⁻¹. Where there are two numbers, they represent pre- and post-treatment inputs, respectively.
Figure 2. (A) Δ¹⁴C values of bulk roots from Toolik. Overall there were no whole-profile changes in root age, but in the litter fertilized roots were older (P = 0.10, F1,4 = 4.40) and in the deep organic soil fertilized roots were younger (P = 0.04, F1,4 = 8.81). The numbers in the bars represent the mean age (in years) of root C. (B) Δ¹⁴C values of bulk soil from Toolik. Treatment × depth effects were significantly different (depth P < 0.001, F3,24 = 65.92; treatment P = 0.960, F1,24 = 0.00; treatment × depth P = 0.036, F3,24 = 3.33; two-way ANOVA). The Δ¹⁴C value for atmospheric CO₂ in the year of sampling (2000) was 90‰.
Figure 3. Soil C contents. The graph on the left represents the unfertilized plots and the graph on the right represents the fertilized plots. Turnover times are shown in Table 1. For the litter, Pool 1 N + P's turnover time was 2 years, Pool 2 N + P's was 26 years, and the New Pool's was 4 years. For the deep organic soil, Pool 1 SS's turnover time was 50 years, Pool 2 SS's was 280 years, Pool 1 N + P's was 15 years, Pool 2 N + P's was 210 years, and the New Pool's was 25 years. For the mineral soil, Pool 1 SS's turnover time was 440 years, Pool 2 SS's was 3,000 years, Pool 1 N + P's was 10 years, Pool 2 N + P's was 4,000 years, and the New Pool's was 40 years.
Figure 4. Modeled Δ¹⁴C curves for the O 0-5 cm horizon. Open squares represent the organic 0-5 cm horizon Δ¹⁴C in the control plots, open circles represent the fertilized plots, and Xs represent what Δ¹⁴C would be if turnover times for Pools 1 and 2 stayed the same and new C simply accumulated. The filled circle represents the measured fertilized value and the filled square represents the measured control value. To obtain the appropriate fertilized Δ¹⁴C value, we must assume some enhancement of decomposition of pre-1981 material.
|
v3-fos-license
|
2020-01-09T09:10:27.548Z
|
2020-01-02T00:00:00.000
|
212771462
|
{
"extfieldsofstudy": [
"Business"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.mdpi.com/2071-1050/12/1/373/pdf",
"pdf_hash": "9afda857c099b96c423790ddf6330f5341682cc6",
"pdf_src": "ScienceParsePlus",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46452",
"s2fieldsofstudy": [
"Environmental Science",
"Economics",
"Political Science"
],
"sha1": "a3e11f2e9b07126ba0dbfeb9cc702cb7e559e764",
"year": 2020
}
|
pes2o/s2orc
|
Towards a Low-Carbon Economy: A Nexus-Oriented Policy Coherence Analysis in Greece
The sustainable management of natural resources under climate change conditions is a critical research issue. Among the many approaches that have emerged in recent times, the so-called 'nexus approach' is gaining traction in academic and policy circles. The nexus approach presupposes the analysis of bio-physical, socio-economic and policy interlinkages among sectors (e.g., water, energy, food) for the identification of integrated solutions and the support of policy decisions. Ultimately, the nexus approach aims to identify synergies and trade-offs among the nexus dimensions. Concerning policy, the nexus approach focuses on policy coherence, i.e., the systematic identification and management of trade-offs and synergies between policies across sectors. This paper investigates the coherence between policies on the water-land-energy-food-climate nexus in Greece. The systematic analysis of policy documents led to the elicitation of nexus-related policy objectives and instruments. Then, the coherence among objectives and between objectives and instruments was assessed using the methodology proposed by Nilsson et al. A stakeholder (trans-disciplinary) orientation was adopted and the need to incorporate stakeholders' recommendations into the policy coherence assessment was highlighted. Overall, the findings revealed that climate and food/agricultural policies represent critical future priorities in Greece, stimulating progress in other nexus-related policies (energy, water and land policies) and being positively influenced by them.
Introduction
The design, analysis and implementation of policies regulating the terms and conditions under which contemporary spatial systems are developed are undoubtedly linked to economic prosperity, social cohesion and the effective use of natural and human assets (e.g., water, land, labour and financial capital). The inherent complexity of such dynamic systems calls for policies that incorporate all relevant dimensions likely to affect availability of and access to resources. In this sense, policy integration has the potential to effectively eliminate existing gaps across different policy sectors, actors and scales of governance [1].
Resource efficiency, under the threat of climate change, represents a critical challenge, embedded in almost all environmental policies. Water availability and allocation, regulation of land uses, food security and energy production are among the most important factors affecting standards of living. [...] Plains Aquifer in terms of reducing production risks, establishing innovative strategies, increasing farmers' profits, etc. [14].
This paper aims to contribute to this fast-growing body of literature by focusing on the assessment of water-energy-land-food-climate nexus policies in Greece in terms of their coherence, both at the level of policy documents and in implementation. Accordingly, the paper investigates: a) the degree of interaction among identified nexus-critical policy objectives, b) the degree of interaction between identified nexus-critical policy objectives and nexus-critical policy instruments, c) the most influencing objectives and instruments as well as the most influenced objectives, and d) the way such objectives and instruments should be considered during policy design so as to minimize trade-offs and exploit synergies. In this context, the nexus approach and the role of policies in its governance are briefly delineated. Then, the methodological framework adopted for policy analysis is presented. Thereafter, the proposed framework is tested against a national-level case study (Greece) and the relevant results are described.
The Nexus Approach and the Role of Policies in the Nexus Governance
The nexus approach has been broadly adopted across disciplines in order to investigate complex and non-linear interconnections among the components of socio-economic/physical systems. For example, the water-energy-food nexus and its interlinkages were considered in the development of a decision support tool dealing with decisions on water infrastructure investments under complex socio-economic and climate change conditions [15], while [16] explored the interdependencies among water, energy and food in a study where the increasing demand for resources and the respective availability under climate change conditions were analysed. Some other cases include: the development of a modelling approach to support policy decisions concerning the food-energy-water nexus [17], the analysis of the water-energy-land-food nexus to improve resource efficiency [18] and the integrated modelling of the food-energy-water nexus with the support of analytical tools [19].
Among the main advantages of the nexus approach is its systemic thinking; not only biophysical but also socio-economic considerations are taken into account. Trade-offs and synergies are sought under a trans-disciplinary approach. This means that the different components of a system are not analysed as independent entities but as interlinked elements which affect each other [20]. Putting pressure on one component may affect the others to different degrees, and as a result, integrated solutions for managing the evolution of the nexus are needed. Water needs energy for treatment, pumping and distribution, while energy needs water for thermal-plant cooling and hydropower. Energy production from different types of fuel (coal, oil, gas, solar, wind, etc.) has different demands for water, results in different levels of Greenhouse Gas (GHG) emissions and affects climate change in a variable way. Land use, be it cropland, forest, wetland or artificial, exerts different demands on resources, and since increased demand for one resource results in increased demand for other resources, one can see how important it is to recognize direct and indirect interlinkages of resources. Due to this complex 'tree' of interrelations [6], policies that usually target a single sector/component tend to have cross-sectoral implications, even though this is not so obvious at first.
The adoption of a nexus approach is therefore increasingly recommended for: i) the analysis of dynamic environmental systems characterised by complex inter-relations among their components, and ii) the design of policies with a long-term time horizon. In addition, a nexus approach is more appropriate when the focus is on the management of scarce/non-renewable resources and the establishment of a low-carbon economy under climate change conditions [5].
From a policy perspective, the governance of the nexus is characterized by numerous interacting sectoral policies establishing the institutional framework for the sustainable development of the different nexus components. A key concept when talking about interactions among nexus-relevant policies is that of policy coherence [21,22]. Coherence is an attribute of both policy content and policy process. In terms of policy content, it concerns policy objectives and instruments, and focuses on the exploitation of synergies as well as on the management of trade-offs within and across policy areas and spatial scales. Coherence of policy goals and instruments is one of the pillars of nexus governance, as it aims for consistency across different sectors, which is the ultimate goal for the efficient management of natural resources. As a pillar of nexus governance, policy coherence is pursued throughout the entire policy making process, from design to implementation. This means keeping a continuous focus on the identification/exploitation of potential synergies and the mitigation of possible conflicts across nexus sectors during policy implementation [23,24].
Methodological Approach
The methodological approach adopted in this paper for assessing policy coherence is articulated in a number of steps and builds on the approach proposed by Nilsson et al. (2016) [24,25] for assessing coherence among the Sustainable Development Goals. Figure 1 illustrates the methodological approach. In detail:
− Problem identification / Key research questions: The problem is identified and key research questions are formulated. The problem and the related research questions define the boundaries of investigation and the nexus components involved. For example, if the problem is water allocation in a region where there is a power plant, irrigated farm land and populated areas, the nexus object of investigation is that of water-energy-agriculture.
− Stakeholder mapping: The participatory dimension is emphasised through the mobilization of stakeholders. Stakeholders interested in or influencing the nexus are engaged in the identification and analysis of the relevant policies. Their role (formal or informal) during policy making and policy implementation is explored through in-depth semi-structured interviews and knowledge elicitation workshops. Relevant stakeholders are invited to contribute to policy mapping and policy coherence assessment.
− Policy mapping - Nexus goals and instruments: A policy inventory is conducted. Policy goals and policy instruments, relevant to the problem being investigated, are identified across the nexus sectors. Stakeholders support the process with their experience and expertise (interviews, stakeholders' workshop).
− Identification of nexus-critical objectives and instruments: The most critical policy objectives and policy instruments are determined. A 'nexus-critical objective' is highly relevant for the nexus issues investigated and has a significant number of interactions with other objectives taken into consideration [26]. A 'nexus-critical instrument' is highly relevant for the nexus issues investigated and has a meaningful number of interactions with the nexus-critical objectives [26]. Stakeholders contribute to the identification of critical objectives and instruments.
− Policy coherence assessment / Validation by stakeholders: Experts conduct a qualitative assessment of coherence among objectives and between objectives and instruments using the approach developed by Nilsson et al. [25]. Stakeholders are then invited to validate the results. In practical terms, nexus-critical objectives and nexus-critical instruments are identified for each nexus component. Then, two separate impact matrices are built, where the interactions between pairs of objectives and between objectives and instruments are scored based on a simple linear scoring scale developed by Nilsson et al. [25]. This scoring scale defines the extent to which the implementation of one objective affects, positively or negatively, the pursuit of another, and the extent to which the implementation of an instrument affects, positively or negatively, the progress of an objective. Cross-sectoral cooperation and possible competing priorities are revealed. In general, negative scores identify conflictive interactions while positive scores indicate synergistic interactions. The scale follows a seven-point typology [27] where each point indicates the degree of positive or negative interaction existing between a pair of objectives or between an objective and an instrument. The meaning of each value of the seven-point typology is presented in Figure 2 [25,27].

An important dimension of the adopted approach is the active involvement and participation of stakeholders in almost all stages of policy analysis and policy elaboration. At this point, a number of plausible questions arise: Why are stakeholders engaged in a policy analysis process? What is the role of stakeholders in such a process? Who are the stakeholders that should be involved?
Stakeholder analysis is a widespread technique usually adopted by public and private organisations dealing with policy assessment [28]. The majority of decision makers underline the need to take stakeholders' views into consideration, as stakeholders will be affected by the respective policies, while a critical number of them may either encourage or hamper the implementation of a policy according to their available means of power [29]. In 1986, [30] referred to stakeholders' participation in policy making by introducing the 'methodology of policy exercise', a preparatory activity through which policy goals and relative strategic options are collaboratively identified. Such an exercise may be used as a preliminary stage prior to the implementation of policy decisions or as a tool to evaluate the performance of existing policies. Moreover, the development of broad synergies during a policy analysis process gives stakeholders the chance to express their preferences, clarify possible misunderstandings, cover several knowledge gaps and shed light on issues that decision makers may not have in mind [31].
Furthermore, participatory planning is a tool that is steadily gaining ground in the field of environmental policy design and assessment [32], especially in cases concerning the future development of complex systems and the effective management of natural resources. Climate change is a relevant example indicating the need for the adoption of an alternative policy analysis model. Such a model would place emphasis on the participation of scientists and stakeholders during the formulation of climate policies. Similar practices are also followed for the management of water [i.e., the Water Framework Directive (WFD) 2000/60/EC] and energy resources, land use regulations, etc.
In the case of Greece, the nexus approach led to the identification of five critical nexus components: water, land, energy, food and climate. These components were selected based on the challenges that must be addressed in the near future, concerning: the reduction of GHG emissions, the reduction of coal and oil use, the penetration of RES into the national energy mix, the production of high-quality agricultural and dairy products, the rational management of water resources especially in the case of irrigation, the mitigation of climate change impacts, the explicit regulation of land uses and the reduction of land use conflicts. The agricultural and tourist sectors were also considered as they put extra pressure on all five nexus components, being the dominant economic sectors in Greece. The key policy issues at stake concern: water resources management; penetration of Renewable Energy Sources (RES) into energy production; land use allocation; impacts of water, energy and land policies on food and energy production patterns; and agricultural and tourist development under climate change conditions. A detailed policy analysis followed and shed light on the most critical policy priorities as well as on the level of coherence among policy objectives and between policy objectives and policy instruments.
Problem Identification and Research Questions
Greece is located in South-Eastern Europe. Its area is approximately 131,957 km² and it consists of 13 administrative units. Its population is estimated at close to 10.8 million inhabitants. The major pillars of its economy are agriculture and tourism, while the main priorities for its future development include: economic recovery; the increase of resilience against climate change; and the establishment of a low-carbon economy. A thorough analysis of the Greek legislative framework and relevant literature [33][34][35][36][37][38][39][40][41][42], complemented with discussions with the engaged stakeholders, led to the identification of relevant issues which shape the specific Greek context and guided the selection of the respective nexus sectors. Such issues are:
− Water scarcity and droughts that will be further exacerbated by climate change.
− Spatial and temporal water availability and demand.

Consequently, the nexus sectors involved are water, energy, food/agriculture, land and climate and the policy-related issues to be investigated concern: a) water resources efficiency, especially in the case of agricultural and tourist uses, b) regulation of land uses, c) sustainable production of food, d) low-carbon energy transitions, and e) climate change adaptation. Accordingly, the research questions for the Greek case were identified:
− How do water and energy policies affect agri-food production and the future development of tourism?
− What kind of policy co-operation should be established in order to eliminate water losses in the agricultural sector, support the production of sufficient food and boost the development of a low-carbon economy?
− Which are the most efficient adaptation and mitigation practices for combating water scarcity and strengthening agricultural production under climate change conditions?
Stakeholders Mapping and Engagement
Stakeholders' engagement was based on the role of stakeholders during decision making and their specific interests as to the nexus-related policies. Stakeholders relevant to the specificities of the Greek case study were involved in the policy analysis and participated in: a) the identification of nexus-critical objectives and instruments and b) the validation of the policy coherence assessment. They also enriched the analysis by mentioning issues related to policy implementation (arrangements, conflicts, trade-offs) and highlighting the need for such issues to be dealt with and resolved during the design of future, improved policies.
About twenty stakeholders (individuals and/or groups) were involved in the entire process and supported the selection and analysis of policy papers, the identification of the main nexus challenges in Greece as well as the assessment of coherence among the nexus-related policies. These stakeholders were representatives of public organisations (e.g., Ministry of Environment and Energy, Ministry of Tourism, Ministry of Foreign Affairs, Public Power Corporation S.A., etc.), private agencies (e.g., the banking sector, agri-food businesses, the Photovoltaic Energy Producers Association, etc.), NGOs (e.g., Greenpeace Greece and the World Wildlife Fund (WWF) Greece) and academic/research institutes (e.g., National Technical University of Athens, University of Thessaly, etc.). They enriched the analysis by offering additional knowledge, experience and expertise emanating from their scientific and professional backgrounds. The interaction between stakeholders and the research team took place through face-to-face interviews, e-surveys and a workshop where all of them had the chance to meet each other and discuss the management of the various nexus challenges.
Identification of Policy Objectives and Policy Instruments
An inventory of nexus-related policy documents was first generated. It included policies concerning all nexus components relevant to the case study (water, energy, land, food/agriculture, and climate). Tourist policy documents were also considered due to the substantive contribution of tourism to the national GDP and its relation to the research questions. The choice of policies was guided by the research problem and research questions. Specifically, the policies selected for each nexus sector relate to the following issues:
− Water: Protection and sustainable use of surface water and groundwater, mitigation of pollution in natural ecosystems.
− Food: Food production, food and fodder quality, preservation of traditional and scarce seeds.
Regarding agricultural and tourist policies, emphasis was placed on: the future development and resilience of the agricultural and tourist sectors against climate change impacts; the limitation of pesticide use; the future development of livestock; the management of agricultural land and pastures; the promotion of tourist entrepreneurship; and the establishment of alternative tourist activities.
Subsequently, a content analysis was performed to identify nexus-critical policy objectives and nexus-critical policy instruments. Policy objectives represent the expectations of the administration as to the development of the various sectors. They reflect strategic priorities and the main future directions pursued for each sector. Policy instruments are tools/techniques supporting the achievement of policy objectives [26,43].
The identification of policy instruments was based on a common distinction between organizational, authoritative (market and non-market), financial and informational instruments. In the case of environmental policy instruments, the distinction was broken down further [44]. This categorization was used as guidance to understand and organise the different policy instruments. However, for the specific purpose of this paper, a systematic classification of policy instruments was not conducted.
The identification of critical objectives and instruments was also based on a literature review and experts'/stakeholders' opinions. In particular, stakeholders contributed to identifying specific objectives and instruments addressing environmental issues and playing an important role in the sustainable management of resources under climate change conditions. The final list of nexus-critical policy objectives and instruments is presented in Table 1 and Table 2.
Policy Coherence Assessment
The assessment of policy coherence was accomplished through: a) the assessment of interactions among nexus-critical objectives and b) the assessment of interactions between nexus-critical objectives and nexus-critical instruments. Stakeholders supported the policy coherence assessment by identifying positive and negative influential relationships. First, the research team conducted a qualitative assessment of the relevant interactions and then the results were presented to stakeholders for validation. Stakeholders, according to their expertise on the nexus sectors analysed, proposed possible amendments/corrections. The researchers went back to the relevant impact matrices and adjusted the scores accordingly. A general discussion followed, focusing on the contrast between coherence on paper and actual conflicts when it comes to policy implementation. Stakeholders reported several divergences that are not currently considered in existing policy papers but need to be addressed in future policies.
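For readers without access to Figure 2, the seven-point scale referred to in the methodology can be written out as a simple lookup; a minimal sketch is shown below. The verbal labels are those used later in the text, the score-to-label ordering follows the typology published by Nilsson et al., and the short glosses are paraphrases rather than quotations, so treat them as assumptions where this paper does not state them explicitly.

# Sketch of the seven-point interaction typology used for the scoring
# (Figure 2 is not reproduced here). The score-to-label mapping follows the
# typology of Nilsson et al.; the short glosses are paraphrases, not quotes.
INTERACTION_SCALE = {
    3: "indivisible",    # progress on x is inextricably linked to progress on y
    2: "reinforcing",    # progress on x directly aids progress on y
    1: "enabling",       # progress on x creates conditions that further y
    0: "consistent",     # no significant positive or negative interaction
    -1: "constraining",  # progress on x limits the options for y
    -2: "counteracting", # progress on x clashes with progress on y
    -3: "cancelling",    # progress on x makes progress on y impossible
}

def label(score):
    """Return the verbal label for a coherence score in [-3, +3]."""
    return INTERACTION_SCALE[score]

print(label(-3))  # 'cancelling', e.g. the GHG-reduction vs natural-gas pair discussed below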
Interactions among Nexus-Critical Objectives
The assessment of interactions between pairs of objectives (listed in Table 1) was conducted using the scoring scale (ranging from −3 to +3) proposed by Nilsson et al. [25] (see Figure 2). The results of the scoring were plotted on an impact matrix (Table 3). It is recalled that negative values indicate divergences while positive values indicate convergences. Each cell of the matrix denotes the type of interaction between two objectives by including the respective value. The goal of such a table is the assessment of the influence that objectives in rows have on objectives in columns. In other words, the assessment was based on the question: 'How does progress on objective x (in row x) influence progress on objective y (in column y)?' To answer this question, two issues were explored: a) whether the interaction between two objectives is negative or positive and b) the degree of interaction according to the values of the seven-point scale. The total influence that an objective x exerts on all other objectives is defined by the row-sum. The column-sum indicates the total influence that an objective y receives from the rest. Each value of the seven-point scoring scale is represented by the respective colour. Overall results show that the majority of interactions are positive (indivisible, reinforcing, enabling), entailing a rather satisfactory level of consistency. This means that progress on most objectives positively affects progress on the rest, while a high row-sum indicates strong synergetic efforts. Most synergies exist among objectives falling within the same nexus domain, as they are characterised by a high level of complementarity. Synergies were also identified between energy and climate goals; food/agriculture and land goals; and water and climate goals. Such synergies are fully justified, as there are strong inter-relations and complementarities between the climate and energy sectors, land uses and agricultural development, availability of land for food production and water resources management under climate change conditions. Energy policies pursue the implementation of practices (e.g., adoption of RES) that will contribute to the reduction of GHGs; land use policies place emphasis on the protection of crops and agricultural land; and water policies promote the need for the efficient management of water resources due to climate change.
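A minimal sketch of the row-sum and column-sum aggregation described above is given below; the objective codes are taken from the text, but the scores are hypothetical placeholders rather than the actual entries of Table 3, and the same routine applies unchanged to the instruments-vs-objectives matrix discussed later.

# Minimal sketch of the row-sum / column-sum aggregation described above.
# Objective codes follow the text; the scores are hypothetical placeholders,
# not the actual entries of Table 3.
objectives = ["C1", "C2", "E5", "F1"]
impact = {  # impact[x][y]: how progress on objective x influences objective y
    "C1": {"C2": 2, "E5": -3, "F1": 1},
    "C2": {"C1": 2, "E5": 0, "F1": 2},
    "E5": {"C1": -3, "C2": -1, "F1": 0},
    "F1": {"C1": 1, "C2": 2, "E5": 0},
}

row_sum = {x: sum(impact[x].values()) for x in objectives}  # influence exerted
col_sum = {y: sum(impact[x].get(y, 0) for x in objectives if x != y)
           for y in objectives}                             # influence received

print("most influencing:", max(row_sum, key=row_sum.get))
print("most influenced:", max(col_sum, key=col_sum.get))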
However, there are also negative interactions (constraining, counteracting and cancelling) among objectives. A cancelling one exists between objectives C1 and E5, concerning the reduction of GHG emissions and the promotion of natural gas, respectively. This is due to the fact that the extensive use of natural gas entails the release of significant GHG emissions into the atmosphere. Counteracting and constraining interactions also exist between: objective E5 and other climate objectives referring to climate change adaptation and mitigation of its impacts; water and food/agriculture objectives; and tourist and land/agriculture objectives. Agriculture needs water for irrigation, while it also uses pesticides, which affect the quality of water resources. Furthermore, land use conflicts exist among the sectors of tourism, industry and agriculture.
The objective exerting the most positive influence is C3 'Combating climate change impacts in the sectors of agriculture, tourism, water, food and land uses' (Row-sum: 38). Objectives C2 'Increase climate change adaptation and resilience' (Row-sum: 36) and F1 'Sustainable development of agricultural sector' (Row-sum: 31) follow. Such results are fairly reasonable, as climate policies dealing with the sustainable management of climate change impacts are expected to strongly affect all nexus sectors. A crucial prerequisite for the efficient use of resources and the evolution of the nexus sectors is their adaptation to the new conditions imposed by climate change, especially in the case of vulnerable regions. Moreover, agriculture is among the main sectors supporting Greece's national GDP, so policies aiming at its sustainable future development are of utmost importance. Thus, we may conclude that climate and food/agriculture objectives are consistent to a satisfactory degree and each of them triggers the effective achievement of the rest.
The objective exerting the least positive influence is F4 'Establishment of strict terms and conditions on pesticides use' (Row-sum: 1). Objectives W3 'Protection of aquatic systems and reduction of pollution' (Row-sum: 2) and E5 'Promotion and extensive use of natural gas' (Row-sum: 5) follow. Considering objective F4, the low degree of positive influence is due to the fact that pesticide use entails negative impacts on water, land and food even in the case of rational use. As for aquatic systems, their protection puts constraints on the accomplishment of objectives related to the development of agriculture, industry and tourism. Finally, as already mentioned, the extensive use of natural gas counteracts the efforts aiming at the establishment of a low-carbon economy due to the GHGs derived from its exploitation.
A more in-depth analysis revealed pairs of objectives that are strongly coherent, as well as pairs that are inconsistent. [...] Such inconsistencies are mainly caused by the negative impacts that the accomplishment of one objective may have on the achievement of another. For example, the extensive use of natural gas hampers the reduction of emissions; the use of pesticides affects the quality of aquifers; and the development of agriculture puts constraints on the industrial sector in terms of land use and pollution of resources.
In addition to the row-sums, there are also column-sums, defining the degree to which objectives are influenced by the rest. A high column-sum implies that an objective is strongly influenced by other objectives (positive influence). According to the 'vertical' aggregations, the most positively affected objective is C2 'Increase climate change adaptation and resilience' (Column-sum: 45). Objectives C3 'Combating climate change impacts in the sectors of agriculture, tourism, water, food and land uses' (Column-sum: 36), F1 'Sustainable development of agricultural sector' and L2 'Sustain a well-balanced national economy and strengthen competitiveness' (Column-sum: 32) follow. A significant conclusion is that the 'most positively affected objectives' are the same as the 'most positively affecting objectives'. Thus, issues related to climate change adaptation, the sustainable development of agriculture and combating climate change impacts not only positively affect the achievement of other objectives, but their accomplishment is also supported by the rest.
There are also two objectives whose accomplishment is negatively affected: F4 'Establishment of strict terms and conditions regarding pesticides use' (Column-sum: −5) and L4 'Spatially balanced distribution of industry' (Column-sum: −1). Such negative scores are mainly due to the impacts that pesticides and industrial activities exert on resources (especially water and land). Consequently, the implementation of environmentally friendly policies puts strict constraints on both pesticide use and the development of industrial activities.
At the top of the list of least positively affected objectives are: E5 'Promotion and extensive use of natural gas' (Column-sum: 3), W5 'Establishment of an updated water pricing system regulating water uses in several sectors (agricultural, industrial, domestic, touristic, commercial, etc.)' (Column-sum: 5), F3 'Sustainable development of livestock (determination of preconditions)' (Column-sum: 6) and F5 'Sustainable development of aquaculture' (Column-sum: 7). Most of these objectives concern sectoral policies and embody a more specific orientation. This is the reason why they are mainly affected by objectives belonging to the same nexus sector.
Interactions between Nexus-Critical Instruments and Nexus-Critical Objectives
In a similar way, the interactions between nexus-critical objectives and nexus-critical instruments were analysed. The total number of selected instruments was 43 (see Table 2) and an impact matrix including the evaluation of policy instruments vs. policy objectives was built (Table 4). The scoring scale was similar to the one previously described. In this case, negative scores mean that the implementation of a policy instrument hampers the achievement of an objective (conflict), while positive scores mean that the implementation of a policy instrument reinforces the achievement of an objective (synergy). The degree of interaction is determined by the respective values of the seven-point scale.
As expected, this second evaluation indicated that instruments and objectives referring to the same nexus sector are compatible with each other. Also, apart from two exceptions, policy instruments concerning the energy sector critically support the achievement of objectives related to climate protection and vice versa. Instruments concerning the efficient and rational use of water fairly support the accomplishment of objectives associated with climate and vice versa. Finally, instruments referring to land positively affect objectives promoting the sustainable development of the agricultural and tourist sectors. The main conflicts have been detected between: instruments promoting the sustainable use of water resources and objectives referring to the land and food sectors; instruments concerning the development of RES infrastructures and objectives for the land sector that place emphasis on the protection of the natural environment (e.g., protected areas, landscape and biodiversity); energy instruments and objectives considering the protection of water resources; climate instruments and the goal promoting natural gas use; and two specific energy instruments (incentives for natural gas exploitation and high prices of renewable energy) and climate objectives.
Table 4. Instruments vs. objectives impact matrix.
The assessment of coherence between policy objectives and policy instruments indicated that the most positively affecting instruments are: Lf 'Subsidies, supporting specialisation in the several productive sectors' (Row-sum: 43), Cd 'Use of indicators (e.g., atmospheric concentrations of GHGs, vulnerability indices, indices of extreme events) for estimating climate change impacts' (Row-sum: 42) and Ce 'Organisation of consultation meetings and participatory workshops for enhancing awareness and public dialogue regarding climate change' (Row-sum: 38). There is also an instrument, Eh 'High prices of renewable energy', exerting a negative influence (Row-sum: −7) and counteracting two energy and three climate objectives. Among the least affecting instruments are: Wc 'Extensive use of technologies that: a) measure water pollution, b) detect sources of pollution' (Row-sum: 1), Le 'Establishment of funding schemes that support renovation of social infrastructures (hospitals, nursing homes, educational institutions)' (Row-sum: 6) and Eg 'Incentives (i.e., low prices) for further exploitation and use of natural gas' (Row-sum: 6).
Similarly to the respective results derived from the objectives vs. objectives cross-impact matrix, policy instruments having to do with climate are among the most positively affecting ones. Land-related instruments are also accompanied by high row scores. Such instruments support the estimation of climate change impacts, the implementation of adaptation and mitigation practices as well as the reinforcement of subsidies that encourage specialisation, the multi-scale spatial organisation of land and the regulation of land uses. They are general enough to support the achievement of a significant number of objectives referring to various nexus sectors. On the other hand, more specific instruments support sectorally focused objectives, so their overall influence is lower. It should be mentioned that the instrument concerning high prices of renewable energy exerts a negative influence, as it hampers the further exploitation of RES and accordingly the reduction of emissions.
Negative interactions between instruments and objectives entail that the implementation of an instrument may put constraints on, counteract or even cancel the achievement of an objective. According to the results derived from the assessment of coherence between objectives and instruments, the strongest negative interactions (cancelling interactions) occur between: [...]. Column-sums indicate the degree to which the progress of each objective is affected by the implementation of each instrument. Climate and land objectives are again among the most positively affected [e.g., C2 'Increase of climate change adaptation and resilience' (Column-sum: 68) and L1 'Promote sustainable spatial integration so as to eliminate spatial inequalities' (Column-sum: 60)], whereas the achievement of objective E5 'Promotion and extensive use of natural gas' is negatively affected (Column-sum: −4) by the rest.
Stakeholders' Validation
The assessment of policy coherence proceeded with the validation of the results by the involved stakeholders. Stakeholders reviewed the two impact matrices and proposed updates to the scoring of inter-relations among policies across nexus sectors. The proposed updates/amendments were incorporated in the relevant impact matrices. Other issues discussed with stakeholders concerned: current policy gaps and future strategic options; policy implementation, challenges and opportunities; and existing synergies and trade-offs among policies.
Stakeholders offered additional information concerning coherence in terms of formal and informal arrangements when it comes to policy implementation. In this way, they shed light on policy gaps between theory (policy papers) and practice (policy implementation) that should be addressed in the future. They mentioned that, when designing policies, co-operation among public and private organizations, NGOs and academic institutions should be explored in order to identify the factors that either create conflicts or strengthen the establishment of synergies with respect to the accomplishment of nexus-critical objectives. Policy arrangements taking place at the implementation level, together with supporting and limiting factors, should be carefully examined and incorporated in future policies. In this context, stakeholders reported a number of cross-sectoral committees promoting synergetic actions in order to confront problems revealed during policy implementation by limiting divergent objectives and seeking compromise solutions. Such committees are created between Ministries and Academic Institutions (e.g., Ministry of Environment and Energy/National Technical University of Athens), between Ministries (e.g., Ministry of Environment and Energy/Ministry of Tourism), between Ministries and businesses (e.g., Ministry of Environment and Energy/Hellenic Association of Photovoltaic Energy Producers), between Ministries and NGOs, etc. In some cases the collaboration is successful; in other cases, the final outcome is negative due to discrepancies or an inability to compromise. According to stakeholders, current conflicts mainly refer to the management of geothermal springs, the allocation of the available water resources, the management of land use conflicts (especially between agriculture and livestock) and the use of lignite for energy production. They also mentioned that such issues are not currently addressed in the relevant policy papers but should be urgently clarified in future policy papers. Policies aiming at mitigating such trade-offs are under discussion or ready to be implemented in the near future, such as the water pricing policy and the new special policy framework for the organisation of the tourist sector.
Thus, the formulation of arrangements enhances the undertaking of participative actions among stakeholders and reinforces consultation and transparency during policy assessment or policy design. Finally, among the enabling and hindering factors determining the successful or unsuccessful outcome of an arrangement are mainly common or conflicting plans/agendas, goals/perspectives, interests and profits, as well as the exchange of experience and expertise, and knowledge diffusion.
Discussion
In this paper, the coherence among water-energy-land-food-climate policies under a nexus rationale was investigated in the case of Greece. The adopted methodological approach placed emphasis on the exploration of possible options to better integrate policies across sectors for the sustainable development of the nexus components. The assessment of policy coherence revealed critical interactions among nexus-critical objectives and between nexus-critical objectives and nexus-critical instruments. Policy priorities were elicited based on: a) the degree of influence that each objective exerts on, and receives from, the rest and b) the influential inter-relations between nexus-critical objectives and nexus-critical instruments. A stakeholder-engagement orientation was also incorporated, underlining the importance of taking specific knowledge and expertise into consideration.
The assessment of policy coherence represents an essential step of the applied approach and contributed to the enrichment of the policy analysis with knowledge about problems arising at both the policy document and the implementation level. Existing conflicts, synergies, trade-offs and negotiations were investigated and inconsistencies at a practical level were explored. It should be mentioned that the assessment of policy coherence is a rather complex and time-consuming endeavour, as the amount of information that needs to be studied and evaluated is massive.
In this study, a mixed-methods approach was adopted, where multiple sources of information were analysed so as to allow data triangulation. Specifically, a systematic process was followed, including: a literature study on nexus interactions, content analysis of policy documents, experts' evaluation and investigation of stakeholders' views. The contribution of stakeholders was particularly important for understanding what is feasible in practice. Stakeholders indicated existing conflicts and synergies as well as possible ways to deal with such conflicts, synergies and trade-offs at the implementation level. They were involved in almost all stages of the policy analysis, and in some cases they were the only source of information for analysing a number of critical issues. Their engagement was an integral and necessary part of the process, as the generation of useful and valid outcomes presupposed their collaboration.
The implementation of the proposed methodological approach in the case of Greece effectively guided the assessment of policy coherence by offering a systematic way of investigating the complexity of the nexus issues. In particular, it supported the organisation of the several tasks to be undertaken and shed light on issues that the researchers did not initially have in mind, such as the exploration of arrangements and trade-offs taking place when policy conflicts occur.
The outcomes showed that the highest degree of coherence was attained when the policies referred to the same nexus sector. However, significant positive policy interactions also exist among policies concerning different nexus sectors, e.g., climate and energy policies. Sectoral (vertical) policies exert the lowest level of positive influence on the rest, as they mainly support the achievement of more specific policy objectives. Climate and food/agriculture objectives embody the highest level of positive influence on the rest, while simultaneously being positively affected by a high number of objectives. Climate change adaptation and resilience, combating climate change impacts and the sustainable development of agriculture are the most influencing objectives. This is reasonable, as in the forthcoming years many regions in Greece are going to experience the impacts of climate change. Thus, progress on policy objectives related to the confrontation of such impacts, the enhancement of resilience and the reinforcement of adaptation capacity entails a strong positive effect on the sustainable management of all other nexus sectors, especially agriculture and food, under climate change conditions.
The instruments concerning: a) the use of indicators for estimating climate change impacts and the respective vulnerability, b) the encouragement of specialisation in all productive sectors and c) the undertaking of participatory actions for enhancing public awareness as to climate change are the most influential ones, i.e., those that positively support the progress of the majority of objectives. Regarding the land use sector, the promotion of specialisation and the regulation of land uses will support the reduction of conflicts among nexus sectors, also enhancing the creation of complementarities among productive sectors. The successful implementation of climate instruments will set the conditions under which several activities, especially agriculture and tourism, will be sustainably developed in the future.
Conflicts mainly exist among objectives referring to the protection of water resources and the development of industry and aquaculture; the rational use of surface water and the production of energy from hydropower; the development of agriculture and the de-centralisation of the industrial sector (competing land uses); and agricultural development and the establishment of tourist activities in rural regions (competing land uses). Such conflicts are expected to be mitigated in the near future through the official regulation of land uses and the strict implementation of the WFD 2000/60 at the national level. There is also a pair of cancelling objectives: those concerning the extensive use of natural gas for energy generation and the reduction of GHG emissions, respectively. Such a cancelling interaction stresses the negative effects that natural gas exploitation has on the release of emissions and the need to adopt alternative energy sources supporting the limitation of emissions. Instrument Eh 'High prices of renewable energy' has a negative row score, as it hinders the extensive use of renewables and consequently the accomplishment of objectives aiming at the establishment of a low-carbon economy. Finally, similarly to the case of the objectives vs. objectives interactions, cancelling inter-relations occur between instruments enabling the use of natural gas and objectives concerning climate change adaptation and the mitigation of its impacts, and, reciprocally, between instruments supporting climate change adaptation/mitigation and the objective promoting the extensive use of natural gas.
Generally, the overall level of coherence is satisfactory, but our analysis highlights the need for achieving a higher level of consistency in order to successfully establish a low-carbon economy based on the efficient use of resources. This paper demonstrates the necessity of exploring policy coherence so that policy gaps, either in policy documents or at the policy implementation level, can be revealed. In the case of Greece, stakeholders mentioned several divergences between theory and practice. Such divergences concern conflicts and trade-offs arising during policy implementation but not anticipated in existing policy papers. Some representative examples include conflicting water uses and arguments over the management of geothermal springs and over lignite use for energy production. Such conflicts will need to be effectively addressed in future policies.
In conclusion, compromise policy solutions call for the adoption of an integrated nexus orientation where the interlinkages and interactions among the nexus components are fully taken into account during policy design and policy implementation. In this context, the results emanating from the policy coherence assessment may be used as a guide for designing improved policies that deal with the observed shortcomings and address possible inconsistencies. In other words, this kind of analysis resembles a learning process supporting the institution of improved, nexus-compliant policies that will be better able to cope with conflicts and trade-offs.
|
v3-fos-license
|
2021-10-17T15:10:26.027Z
|
2021-10-15T00:00:00.000
|
239013512
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://cepsj.si/index.php/cepsj/article/download/1118/532",
"pdf_hash": "1ab3b571726dd678f708db338279751f90069ecd",
"pdf_src": "ScienceParsePlus",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46454",
"s2fieldsofstudy": [
"Education"
],
"sha1": "b799b93e56a2737376a72372cf5fe21d737fda83",
"year": 2021
}
|
pes2o/s2orc
|
Effective Physical Education Distance Learning Models during the Covid-19 Epidemic
• The Covid-19 epidemic has had a strong impact on the implementation of the entire educational process due to the closure of public life and schools. Physical education (PE) teachers were faced with the challenge of conveying at a distance the learning content that they would otherwise teach in the sports hall. Our research aimed to determine which PE distance learning models proved to be the most effective during the epidemic, resulting in a high level of pupils’ activity despite participation from home. In the process of data collection, we included 33 PE distance learning lessons at the lower secondary level, where six pupils (3 girls and 3 boys) wore accelerometers in each lesson (n = 198 pupils). The results showed that the most effective model was the flipped learning teaching model, where pupils were given an overview in advance of the different forms of teacher video recordings. Then they also actively participated with their ideas in the performance of the online lesson. A statistically significantly less efficient version of the flipped learning teaching model had prepared interactive assignments and games. This was followed by a combination of online frontal teaching with station work and frontal teaching. The least effective was independent work carried out by the pupils according to the instructions prepared by the teacher. Although the two flipped learning teaching models were the most effective in terms of exercise intensity, it is very difficult to implement them in practice because they require too much teacher time.
Effective Physical Education Distance Learning Models during the Covid-19 Epidemic
Tanja Petrušič and Vesna Štemberger
Introduction
The Covid-19 coronavirus epidemic broke out in the Chinese province of Hubei in late 2019 but spread rapidly to many countries around the world in the first half of 2020 due to high infection rates (Velavan & Meyer, 2020), leading to the lockdown of public life and subsequent school closure (Petretto et al., 2020; Viner et al., 2020); Slovenia was no exception. Even though children were exposed to a significantly low risk of developing the disease or long-term complications after recovering from the infection (Qiu et al., 2020), they were deemed to be carriers of the virus to more vulnerable groups; so, their education was fully transferred to the online environment (Quezada et al., 2020). Such a change in work methods had a strong impact on the implementation of the entire pedagogical process. In a very short time, Physical Education (PE) teachers were compelled to find new ways to teach PE-related learning content, which otherwise takes place in a sports hall, at a distance (Varea & Gonzáles-Calvo, 2020), as effectively as possible (achieving the result of moderate and high-intensity physical activity of the pupils). Adaptation was especially challenging with types of content that require a large space (sports hall) and forms of in-class grouping that encourage socialisation among pupils (e.g., group work) (Richards et al., 2020). Distance learning, with knowledge and use of various technologies, enabled the teaching of almost all PE-related content; thus, it was only necessary to find the most suitable approach and form of within-class grouping for each type of learning content to enable pupils to achieve their learning goals with a high activity level while maintaining a positive learning environment during PE distance learning lessons (Filiz & Konukman, 2020). Teachers were able to teach PE-related content at a distance through online classes via live-streaming, recorded videos, movement diaries, assignments for pupils, online materials with lessons with practical and theoretical content, online questionnaires or distance-learning programmes with suggestions for physical activity from home (ibid.). In addition to selecting a suitable digital tool based on learning content for distance learning, teachers also had to select a suitable form of within-class grouping. Teachers usually only use one form of within-class grouping per one hourly PE lesson when teaching in the sports hall, but to increase the pupils' activity, they could also use a combination, for example, the first part of the lesson as frontal work (in queues) followed by group work (at stations) (Videmšek & Pišot, 2007). Teachers can also use the same combination to teach PE at a distance on online platforms such as Zoom, Microsoft Teams, Blackboard and Canvas (Guraya, 2020), which allow pupils to be divided into 'rooms' or 'groups'.
Additionally, a different virtual approach to teaching, namely the flipped learning teaching model (Chick et al., 2020), may also improve the pupils' learning experience.In comparison to the traditional teaching model, where the focus is on the teacher and their explanations (Betihavas et al., 2016), the flipped learning teaching model focuses on the pupils' ability to acquire new knowledge and understanding on their own through mutual collaboration (Sohrabi & Iraj, 2016).In this form of work, pupils receive material in advance in the form of various videos of practical performances, recorded lessons or short assignments, followed by a short explanation or review of the content during the lesson itself.Afterwards, the teacher divides the pupils into smaller groups.These groups discuss their newly acquired knowledge and jointly work out a specific problem-solving task related to the learning content received in the pre-prepared material (Guraya, 2020).Teachers can use the flipped learning teaching model for PE-related content under regular conditions, in which the pupils' lessons and group work take place effortlessly in a sports hall without social distancing, as well as for distance teaching, in which pupils receive pre-prepared material via online classrooms, e-mail and so on, working in small groups, during the lessons conducted via the above-mentioned online platforms (ibid.).
Distance learning overcomes the limitations of space and time (Buschner, 2006;Kooiman, 2017, Mohnsen, 2012;Mosier, 2013;Rhea, 2013); thus, not much research exists on effective approaches to teaching PE at a distance to achieve a sufficiently high moderate and high-intensity of pupils' activity during the lessons themselves.Moreover, as the lockdown of public life during the first wave of the Covid-19 epidemic happened completely unexpectedly, at that time, most teachers stepped into this new field of distance learning PE teaching unprepared.Due to the inability to predict the duration of such a work form (depending on the country's epidemiological picture), it is apparent that PE teachers need help in preparing effective distance learning lessons based on recent research findings that are specifically related to the current state of the epidemic, with the same restrictions and educational opportunities.
Therefore, in this research, we posed the following research question:
• Which PE distance learning models are most effective during the Covid-19 epidemic, resulting in a high level of pupil activity despite all limitations and participation from home?
Method
The research was conducted through an action research approach.
To determine which PE distance learning models are the most effective during the Covid-19 epidemic for pupils to achieve high levels of activity during the lessons, we used a cause-related, non-experimental work method.
Participants
The action research included 33 distance learning PE lessons at the subject (lower secondary) level of one primary school, each taught by one of two alternating PE teachers. Pupils from Grades 6, 7, 8 and 9 (average age: 12.5 years) were included. There was an average of 25 pupils in each class, but for the needs of our research only six pupils (3 girls and 3 boys) were randomly selected per lesson; eligible for selection were only those pupils who, based on their PE report card, had motor skills within the average range of Slovenian pupils of the same age and whose parents signed a consent form for participation in the research. These pupils wore accelerometers during each lesson, which showed exactly how many minutes and seconds were spent inactively and how many were spent in low, moderate and high-intensity activity. In total, the activity level during distance learning lessons was measured in 198 pupils (99 girls and 99 boys).
Research design
The action research aimed to discover which distance learning models proved to be most effective during the Covid-19 epidemic. The research was conducted during the closure of schools in the second wave, in October and November 2020. During these two months, we carried out an observed and monitored PE teaching process in which we alternated between different teaching models (five different PE distance learning models were tested); preparations for implementation began about a month before the start of teaching, in September 2020. In Model 1 (independent work), pupils were given instructions in advance with detailed descriptions of the movements, sketches of the correct manner of performance, the number of repetitions for each element and the approximate duration of the exercise.
In Model 2 (frontal teaching), the teacher conducted a distance learning PE lesson with the pupils by signing in on the Zoom platform through computers/mobile phones when the lesson was scheduled that day.Then they performed the elements by watching a direct demonstration and listening to the teacher's explanation and then repeating the exercises themselves.
In Model 3 (a combination of frontal teaching and group work), the work was organised in the same way as in Model 2, except that the teacher combined two different forms of within-class grouping during the lesson and thus, in addition to frontal teaching, group learning was also used, and the pupils were divided across the Zoom platform's rooms.Then further work was done around the stations in the sports hall.
In Models 4 and 5 (the flipped learning teaching model with interactive assignments and games, and the flipped learning teaching model with videos), pupils were given pre-prepared material that they had to work through the day before the scheduled lesson. In Model 4, pupils were given interactive assignments and games which, when correctly completed, previewed the content to be learned the next day. The interactive assignments and games included online puzzles showing a picture of the correct performance of a certain element, sorting pictures into the correct order of movement, naming movements by connecting words to pictures, and so on; pupils also performed the movements of the elements presented as they solved the assignments.
In Model 5, pupils were not given interactive assignments and games in advance but prepared videos featuring demonstrations of the elements to be learned in the next lesson.Upon observing the demonstration, they had to try to perform the elements themselves.In Models 4 and 5, the material also contained one problem-solving activity that the pupils had to consider and then solve together in small groups in the next day's PE lesson.Thus, the sum of minutes in moderate and high-intensity activity by pupils in Models 4 and 5 was the result of both parts of the lesson (on the day of the scheduled lesson and the day before, when working with material or solving assignments/games and imitating the movements on video).
The teaching and observation of PE lessons were carried out over two weeks and two days, every working day from Monday to Friday (lesson content: athletics, natural forms of movement, games and general conditioning). Each day, we taught and observed two or three PE lessons, with at least one break the length of an hourly lesson in between, as we had to replace and disinfect the accelerometers and straps and then place them (without personal contact) in front of the doors of the pupils whose activity level was to be measured in the following lesson. For the handover, the accelerometers were inserted into pockets on the straps, which could easily be attached to the body (pupils fastened them around the waist over the T-shirt so that the device sat on each individual's side during the lesson). The activity level of the six selected pupils (three girls and three boys) was then measured. In four of the five distance PE models (independent work was conducted differently), lessons were taught by one of two alternating PE teachers. Via the Zoom online platform, they were observed by a PE didactics assistant who monitored the pupils' activity level and prepared diary records. In the independent work model, pupils could perform the work at any time during the day, as they received work instructions and accelerometers in advance; they only needed to note the start and finish times for the accelerometer data reading and enter them into the diary record. Based on these diary records, we refined the preparation of lessons and materials for pupils while introducing changes to the pedagogical process.
The purpose of the research was explained in writing to the parents of the pupils participating; complete anonymity was guaranteed for all.
Measuring instruments
Two different measuring instruments were used for the research:
• six MMOXX1.07 accelerometers (USB waterproof physical activity sensor, 35×35×10 mm), with which we measured the pupils' activity intensity level;
• diary records.
The accelerometers measured the pupils' activity intensity level during the lessons: how many minutes they spent in low (<3 METs), moderate (3-<6 METs) and high-intensity activity (>6 METs) (Colley & Tremblay, 2011), which was also our main indicator of the effectiveness of lessons under the individual model tested in a particular lesson. In addition, we used unstructured instruments (diary records) to monitor and record the course of the entire action research. For each model we tested, we kept a separate diary in which we recorded, before each lesson, what was needed for the implementation (what kind of materials were required for the pupils and which programmes would be used to prepare them), and, during the lesson, how the pupils responded and participated. After the lesson, we described the material needed and used during the lesson, the length of the lesson, recommended changes for the next lesson and, most importantly, the level of pupils' activity, that is, how many minutes they spent in moderate and high-intensity activity.
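To illustrate how such accelerometer output can be turned into the lesson-effectiveness indicator described above, a minimal Python sketch follows. It is a sketch under stated assumptions only: the MMOXX1.07's actual export format and processing software are not described in the study, so the one-minute epochs, the example MET values and the function names are hypothetical.

```python
# Illustrative sketch only: the study used MMOXX1.07 accelerometers, but the
# device's export format is not described, so the per-minute MET values and
# function names below are hypothetical.
EPOCH_MINUTES = 1  # assumed one-minute epochs

def classify_intensity(met: float) -> str:
    """Apply the MET cut-points cited above (Colley & Tremblay, 2011)."""
    if met < 3.0:
        return "low"
    if met < 6.0:
        return "moderate"
    return "high"

def minutes_of_mvpa(met_series: list[float]) -> int:
    """Sum minutes spent in moderate or high-intensity activity during a lesson."""
    return sum(
        EPOCH_MINUTES
        for met in met_series
        if classify_intensity(met) in ("moderate", "high")
    )

# One (hypothetical) pupil's per-minute MET values during a distance lesson
mets = [1.5, 2.8, 3.4, 5.1, 6.3, 7.0, 2.2, 4.9]
print(minutes_of_mvpa(mets))  # -> 5 minutes of moderate/high-intensity activity
```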
Statistical Analysis
The acquired data were processed with IBM SPSS Statistics 22 for MS Windows. We first calculated basic statistics of pupils' activity levels for each model studied. We then used the Kruskal-Wallis test to check whether there were statistically significant differences between the distance learning models in the number of minutes pupils spent in moderate and high-intensity activity, followed by the Mann-Whitney test to check which pairs of models differed (each of the five models was compared with each of the others) and therefore which PE distance learning models were statistically significantly the most effective according to our quality indicator (pupils' activity level).
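For readers who wish to reproduce this two-step procedure outside SPSS, a minimal Python sketch is given below. It is an illustration only: scipy is substituted for the SPSS software actually used, the per-lesson minute totals and group labels are hypothetical, and no correction for multiple comparisons is shown, since none is described above.

```python
# Illustrative sketch of the two-step test procedure described above.
# The study used IBM SPSS Statistics 22; scipy is substituted here and the
# per-lesson minute totals are hypothetical, not the study's data.
from itertools import combinations
from scipy.stats import kruskal, mannwhitneyu

# Minutes of moderate and high-intensity activity per lesson, grouped by model
models = {
    "independent work":      [4.1, 4.3, 4.6, 4.4, 4.2],
    "frontal teaching":      [7.8, 8.2, 8.0, 8.4, 8.1],
    "frontal + stations":    [15.0, 15.3, 15.4, 15.2, 15.1],
    "flipped (assignments)": [22.0, 22.3, 22.5, 22.2, 22.4],
    "flipped (videos)":      [28.0, 28.2, 28.3, 28.1, 28.4],
}

# Omnibus test: do the five models differ in pupils' activity?
h_stat, p_value = kruskal(*models.values())
print(f"Kruskal-Wallis H = {h_stat:.2f}, p = {p_value:.4f}")

# Pairwise follow-up: which pairs of models differ?
for (name_a, a), (name_b, b) in combinations(models.items(), 2):
    u_stat, p = mannwhitneyu(a, b, alternative="two-sided")
    print(f"{name_a} vs {name_b}: U = {u_stat:.1f}, p = {p:.4f}")
```

In the study itself, the omnibus result corresponds to Table 3 and the pairwise comparisons to Table 4.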
Results
Table 2 reports the number of minutes pupils spent in moderate and high-intensity activity across the 33 taught and observed PE lessons. The data in the table are separated according to the PE lesson model used and, within each model, according to class. In Model 1 (individual work), the pupils' average sum of minutes in moderate and high-intensity activity was the lowest, at 4.34 minutes (lowest result: 3.00 minutes, highest result: 5.45 minutes). Model 1 was followed by Model 2 (frontal teaching) with an average of 8.09 minutes (lowest result: 4.32 minutes, highest result: 11.08 minutes) and Model 3 (a combination of frontal teaching and group work) with an average of 15.19 minutes (lowest result: 13.25 minutes, highest result: 16.38 minutes). Pupils achieved the highest sum of minutes in moderate and high-intensity activity in Model 4 (the flipped learning teaching model combined with interactive assignments and games) and Model 5 (the flipped learning teaching model combined with videos), where the minutes of activity from both days were added together (working through the pre-received material the day before and the lesson itself). For Model 4, the average value was 5.10 + 17.12 minutes (lowest result: 4.13 + 16.32 minutes; highest result: 6.08 + 18.41 minutes), and for Model 5 as much as 8.09 + 20.06 minutes (lowest result: 7.12 + 18.49 minutes, highest result: 9.27 + 20.15 minutes). Table 3 shows the values of the Kruskal-Wallis test, which was used to check whether there were statistically significant differences between the individual distance PE models used. It is evident from the table that statistically significant differences (p < .001) appear among the individual models in the achieved levels of pupils' activity.
Table 3 Differences between PE distance learning models in terms of effectiveness
Next, we used the Mann-Whitney test to analyse which models differed statistically significantly from one another, that is, which models were statistically significantly more effective than others in terms of the intensity of pupils' physical activity (Table 4). Table 4 shows the results of the Mann-Whitney test, which was used to check which pairs of distance PE models differed significantly in pupils' activity. Each model was compared with every other model, and statistically significant differences at a risk level of less than .05 are marked in italics in the table. Table 4 shows that statistically significant differences in the achievement of moderate and high-intensity activity occur among all five distance PE models tested (the models in Table 4 are ranked from least to most effective). This tells us that each of the studied models is statistically significantly more effective than the previous one in achieving the highest possible moderate and high intensity of pupil activity during lessons: the least effective model was individual pupil work, followed, in order of increasing effectiveness, by frontal teaching, combined frontal teaching and group work, the flipped learning teaching model (with interactive assignments and games) and the flipped learning teaching model (with videos).
Discussion
The most important contribution of the above-mentioned research is gaining insight into which distance teaching models can most effectively impact the higher achieved levels of moderate and high-intensity activity in distance PE lessons.Since the Covid-19 epidemic has temporarily altered the school teaching system (Petretto et al., 2020;Viner et al., 2020), distance learning is currently unavoidable, bringing a host of obstacles, including a lack of space, poor visibility of direct demonstrations, the teachers' inability to protect and assist the pupils' performances of elements, lack of tools, props, and similar issues.Nevertheless, teachers must teach pupils PE lessons in compliance with all restrictions; thus, they should select the model that would best allow them to transfer knowledge to pupils concerning the content they want to teach, concerning the barriers brought on by distance teaching of such content and, at the same time, to consolidate or transmit new learning material and thus enable a high level of pupils' activity in the most diverse and interesting way possible.
The research provided an answer to the research question of which PE distance learning models were most effective during the Covid-19 epidemic, resulting in a high level of pupil activity despite all limitations and participation from home.Each individual studied model brought both advantages and disadvantages or limitations in teaching due to the declared epidemic.Therefore, in the following, we conducted a more detailed analysis of the comparisons between each one.
In comparison with the other four models studied, individual work proved to be the least effective distance learning model for pupils (p = .001;.004;.004;.004).With this model, pupils were given instructions in advance for individual work at home or exercise in nature.Considering the average number of minutes spent in moderate and high-intensity activity, very low values were achieved here (M = 4.34 minutes).From the obtained results, we could conclude that without the teacher's direct supervision, not all pupils performed the elements qualitatively and correctly.Based on interim results and diary records of their obligations, we tried to increase the duration of independent work and the number of repetitions during the research, but their minutes spent in moderate and high-intensity activity did not improve statistically significantly.This model brought advantages such as unlimited space and an unnecessary internet connection, as the elements could be performed outdoors, and disadvantages such as insufficient teacher supervision, poor performance and perhaps a poorer understanding of instructions and poorer performance of required exercises without direct demonstration.In this regard, Goudas & Magotsiou (2009) state that pupils in their study felt statistically significantly better about individual learning than group learning, as they expressed discomfort with the implementation of elements in group learning.
Frontal teaching via the Zoom platform proved to be a slightly more effective model. Here, pupils achieved slightly higher results (M = 8.09 minutes) than with individual work in terms of minutes spent in moderate and high-intensity activity. We expected even higher values, as it was a frontal form of lesson, except that both the pupils and the teacher participated from home. The frontal form of learning makes it easier to keep track of all the children, as instructions (demonstration, explanation, etc.) are provided to all pupils simultaneously (Zajec, 2009). The negative feature of this form of learning is that it is more difficult to apply differentiation and individualisation, because the tasks are usually the same for all pupils, which means that they may be too demanding for some and not demanding enough for others, and they do not encourage imagination, independent thinking, curiosity and creativity (Kavčnik, 2008). In our study, the teacher taught in a similar way as in school, yet tailored to the situation, with explanations, direct demonstrations and repetition. The problems encountered here included poor visibility of the direct demonstration (e.g., body position (bending backwards, forwards, to the side, etc.), lift height, gaze orientation, etc.), the teachers' inability to observe and correct all pupils' performances at once, negative exposure of individual pupils in front of all their classmates when performances were corrected, spatial problems during performances, problems with the Internet connection and, consequently, at times poorer communication between teacher and pupils. Despite the constant monitoring of pupils by the teachers, which was feasible compared to the previous model, the pupils achieved markedly low values of moderate and high-intensity activity due to the aforementioned obstacles that frontal teaching brings to distance learning of PE.
Depending on the intensity of activity shown by pupils, frontal teaching was followed by a combination of frontal teaching and group work at stations (M = 15.19 minutes).Such a result is already quite high in terms of distance work as when performing work in the sports hall; the goal is to achieve at least 50% of the time devoted to the PE (at least 22.5 minutes) in moderate and high-intensity activity (Hollis et al., 2016).With distance work, the lessons are shorter (approx.30 to 35 minutes), so even half of this time spent in moderate and high-intensity activity is a good enough result.This model had similar problems as with frontal distance learning, except that the pupils here were not so negatively exposed as the corrections of their performances were heard in small groups.The advantage of this form of work was that the teacher could first explain the material to everyone and demonstrate the work that awaited the pupils at each station; then, the performances were rehearsed in front of a small group of classmates at their stations or rooms.Next, the teacher randomly divided them into stations, where they remained until the end of the lesson.In the meantime, the teacher joined the rooms, observed their work, gave them additional instructions, motivated them and corrected their performances.After about three to five minutes, the assignments at each station were switched as if the pupils were moving to the next station.In this way, the work was kept interesting and diverse, ensuring the pupils did not start to get bored, and as a result, they reached higher levels of activity intensity.This form of work also allowed teachers to monitor the work of each student quite effectively.
The combination of frontal teaching and group work at stations was followed by the flipped learning teaching model, depending on the intensity of the pupils' activity, in which pupils received material in the form of interactive assignments and games (M = 22.22 minutes; the result is the sum of both workdays).Such a teaching model has proven to be the second most effective in achieving high-intensity activity during lessons.Pupils received the learning material through interactive assignments and computer games, which was a great approach to learning.It is largely known that pupils spend too much time in front of computers (Sharma & Majumdar, 2009), and during the Covid-19 epidemic, this amount of time increased due to compulsory social distancing from their peers (Montag & Elhai, 2020), and their motivation for classical learning decreased (Dietrich et al., 2020).So, we combined the teaching of materials through interactive assignments and games so that the time they spent playing games also brought them some new knowledge.They enjoyed playing computer games (puzzles, composing terminology, connecting words to pictures, rearranging the order of movements and so on, while imitating movements and performing various elements demonstrated through assignments and games, etc.).By solving assignments and playing games, they gained minutes of activity on a day when there is no PE lesson scheduled, but at the same time, they received insight into the next PE lesson and group problem-solving activity with classmates.In the lesson itself, the teacher initially only briefly explained the content of the lesson, which they learned a lot about the day before through games, and then randomly divided them into groups, where they could immediately start solving the task, which they also already got with the assignments and games, so no time was wasted here by giving additional instructions.The pupils solved assignments in groups with movements, so they were active for nearly the entire time of group work; simultaneously, no one was negatively exposed to ignorance, as they all got at least minimal insight through the previous day's assignments.In our action research, such a model has proven to be extremely effective.However, its disadvantage is the large amount of time such preparations took from teachers, as we had to design interactive assignments and games according to the content of each lesson and pass them on to the pupils.Certain problems also occurred with pupils who had older computers and poorer internet connections as newer computer systems only supported certain games.
The statistically significantly most effective model in terms of the number of minutes spent in moderate and high-intensity activity proved to be the flipped learning teaching model, which was organised in the same way as the previously described model, only that the material received in advance by pupils was not in the form of interactive tasks and games, but in the form of videos (M = 28.15minutes; the result is the sum of both days of work).The videos showed a direct demonstration of each element they needed to learn for the upcoming lesson (demonstrations were recorded from different angles and at different speeds of implementation, thus adding differentiation to learning; these could be watched and performed at a slower pace by the weaker ones or at a more demanding faster pace by the stronger ones, and a demonstration of incorrectly performed elements so that pupils would not repeat such mistakes).
Furthermore, the recordings contained music and slipups during the recording sessions, making them more interesting for the pupils, who wanted to replay them several times (their opinions on the performances were also included in the diary records for the intermediate upgrade of teaching preparations).Such a model proved to be the most effective but also the most demanding for teachers to implement.In addition to the large amount of time necessary for preparing the implementation of this model (model 5 requires even more time than model 4; about 5-6 hours to edit a recording only 8-10 minutes long, excluding planning and recording), for such preparations, teachers urgently need help with the filming, as the recordings of movements in direct demonstrations of elements are not visible enough.
Based on the stage of the learning process, each PE distance learning model that we studied included lessons that provided new learning material and consolidation, enabling us to compare the effectiveness between them.
Conclusions
Distance learning of PE has become a special challenge for those who teach this subject, as they bear part of the responsibility to achieve the recommended daily amount of physical activity of individuals, which is extremely important for maintaining health and strengthening the immune system and consequently to combat Covid-19 disease effectively.Recommendations and measures during the epidemic, such as staying at home, closing parks, sports halls, fitness centres and similar, were necessary to curb the spread of the disease but had a significant effect on reducing the recommended daily amount of physical activity of individuals (Siordia Jr., 2020).Of the five distance PE models studied, only two proved to be extremely effective in achieving a sufficient amount of moderate and high-intensity activity, namely the flipped learning teaching model in combination with the material in the form of interactive assignments and games and the flipped learning teaching model in combination with the material in the form of videos.The model involving a combination of frontal teaching and group work at stations also provided satisfactory results as the pupils spent about 50% of the time of shortened PE distance learning lessons (about 30-35 minutes) in moderate and high-intensity activity.
In the action research, we examined and observed 33 distance-learning PE lessons, in which we introduced five types of distance PE models that we designed based on theoretical models and transferal of teaching practices held in schools.Based on the designed models, we prepared the material and implementation plan for each lesson according to the content with the help of accelerometers and diary records.After conducting the measurements, as the final part of the action research, we presented the concept of working with five PE distance learning models to all PE teachers working at the school.We provided them with data on the intensity of pupil activity in each model, the daily records of lessons taught, interactive tasks with access passwords and instructions on how to design new ones for other content that were not included in our research, and videos with notes of how we filmed them ourselves and edited them into meaningful teaching material.Despite good preparation, the research had several limitations, particularly the inclusion of only one school and consequently a smaller sample group, incomplete numerical distribution of lessons concerning each model (the most effective models were studied at the minimum number of lessons) and inequality of learning content among some models (the MET level also depends on the content of exercise and didactic level, which varied in the study), making generalisation limited.Additional research regarding distance teaching of PE will be conducted on a larger sample group with various ages of children and with a larger number of hours for each type of learning content in each model.Distance education is currently a major concern for all teachers as it is unknown how long such a situation will last or when it might recur.For this reason, they must be maximally prepared to conduct effective distance learning lessons.For this reason, in the future, we aim to research and discover the effectiveness of new distance learning models for PE, which we have not been able to include in the current action research.
Table 1
Five PE distance learning models
Table 2
Basic statistics on the sum of minutes of moderate and high-intensity activity of pupils per each type of PE distance learning model
|
v3-fos-license
|
2018-04-03T04:45:39.493Z
|
2015-02-19T00:00:00.000
|
212210
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.jidc.org/index.php/journal/article/download/25699497/1255",
"pdf_hash": "f6db529b8fed8e3b051f358d00881e14ebd856cf",
"pdf_src": "ScienceParseMerged",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46459",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"sha1": "116dabb219945c10981b109571f359dc672d8713",
"year": 2015
}
|
pes2o/s2orc
|
Multidrug resistance in Pseudomonas aeruginosa isolated from nosocomial respiratory and urinary infections in Aleppo , Syria
Introduction: Pseudomonas aeruginosa represents a serious clinical challenge due to its frequent involvement in nosocomial infections and its tendency towards multidrug resistance. Methodology: This study uncovered antibiotic susceptibility patterns in 177 isolates from inpatients in three key hospitals in Aleppo, the largest city in Syria. Results: Exceptionally low susceptibility to most routinely used antibiotics was uncovered; resistance to ciprofloxacin and gentamicin was 64.9% and 70.3%, respectively. Contrarily, susceptibility to colistin was the highest (89.1%). Conclusions: Multidrug resistance was rife, found at a rate of 53.67% among studied P. aeruginosa isolates.
Introduction
Pseudomonas aeruginosa is a Gram-negative, obligate aerobic, and oxidase-positive bacillus.In healthy subjects, P. aeruginosa is part of skin flora; however, it can cause serious morbidity in individuals with predisposing factors such as considerable body injury or certain pulmonary conditions (e.g., cystic fibrosis) [1].This opportunistic pathogen is a common cause of hospital-acquired infections and, to a lesser extent, community-acquired infections.Examples of infections include pneumonia, urinary tract infections (UTIs), bacteremia, and wound and burn infections.Nosocomial pneumonia due to P. aeruginosa is usually associated with poor survival rates [2,3].
Despite rapid advancement in healthcare provision, partially through the introduction of highly effective antibiotics into clinical practice, P. aeruginosa remains a pathogen to be reckoned with.This is certainly the case at intensive care units (ICUs), particularly in patients who are put on a mechanical ventilator [3].Nosocomial bacteremia accounts for over 50% of ICU infections, and the causative pathogen is frequently found to be P. aeruginosa [4].
Multidrug resistance (MDR) displayed by P. aeruginosa is usually found during empirical therapy or following lengthy exposure to antibiotics.P. aeruginosa has myriad intrinsic and extrinsic factors that work together to cause resistance to structurally and functionally dissimilar antibiotics.Certain resistance mechanisms that have been found in other bacteria may also play a role in MDR in P. aeruginosa [5].Importantly, P. aeruginosa can express chromosomally encoded multidrug efflux pump genes coupled with outer membrane impermeability.P. aeruginosa possesses a spectrum of virulence factors, including a type III secretion system, which contributes massively to its pathogenicity [2].
Additionally, P. aeruginosa is capable of developing acquired antimicrobial resistance.The latter emerges due to mutations arising in chromosomally-encoded genes and/or horizontal gene transfer of resistance genes, a variety of which can be carried together on transferable structures (integrons) [6].Finally, P. aeruginosa is adept at forming biofilms on surfaces, so it can survive antibiotics and other disinfectant chemicals as well as the body's innate and adaptive inflammatory defenses [2,7].
Studies from different geographical localities worldwide highlight the problem of the increased occurrence of infections with MDR P. aeruginosa. These infections pose a heavy human and economic burden because antibiotics are costly and frequently ineffective, leading to high mortality rates [2][3][4][5][6][7]. Informative data about these types of infections are lacking from Syria; therefore, this study was conducted to probe the prevalence of MDR among P. aeruginosa isolates in northern Syria. This study focused on susceptibility towards antimicrobial agents that are routinely administered in local hospitals.
Methodology
This cross-sectional study was conducted between September 2011 and September 2012 at three major hospitals in Aleppo, Syria: Ibn Rushd Hospital, Aleppo University Hospital, and Al-Basel Center for Heart Disease and Cardiac Surgery.The relevant approval was obtained from the ethics committee of Aleppo University.Samples were taken from ICU patients with lower respiratory infections (LRIs) and from patients with nosocomial UTIs.The mean age of the patients was 52.7 years.All lower respiratory and urinary samples came from inpatients who had been hospitalized for ≥ 48 hours.A total of 177 nonrepetitive isolates of P. aeruginosa were found, and the guidelines of the Clinical and Laboratory Standards Institute (CLSI) were followed for pathogen identification and testing [8].Female patients contributed 59 of the isolates, while the rest came from male patients.P. aeruginosa was isolated and identified using standard biochemical reactions, and the results were confirmed using the Phoenix Automated Microbiology System by BD (Becton, Dickinson and Company, Franklin Lakes, USA).Afterwards, bacterial isolates were tested for susceptibility to antibiotics by the standard Kirby-Bauer disk diffusion method.Twenty-three antimicrobial susceptibility testing disks (Oxoid, Basingstoke, UK; codes are listed in Table 1) were used, and results were determined and interpreted based on CLSI guidelines.Multidrug resistance was defined as resistance to three or more unrelated antibacterial agents.
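To illustrate the multidrug-resistance rule used above (resistance to three or more unrelated antibacterial agents), a minimal Python sketch follows. It is an illustration only: susceptibility calls are assumed to have already been interpreted from Kirby-Bauer zone diameters against CLSI breakpoints, and the antibiotic-to-class mapping, isolate records and function names are hypothetical rather than the study's data.

```python
# Illustrative sketch only: susceptibility calls (S/I/R) are assumed to have
# already been interpreted from Kirby-Bauer zone diameters against CLSI
# breakpoints. The antibiotic-to-class mapping and the isolates are
# hypothetical examples, not the study's data.
ANTIBIOTIC_CLASS = {
    "ciprofloxacin": "fluoroquinolone",
    "levofloxacin": "fluoroquinolone",
    "gentamicin": "aminoglycoside",
    "amikacin": "aminoglycoside",
    "imipenem": "carbapenem",
    "colistin": "polymyxin",
}

def is_mdr(antibiogram: dict[str, str], threshold: int = 3) -> bool:
    """Resistant to three or more unrelated (different-class) agents."""
    resistant_classes = {
        ANTIBIOTIC_CLASS[drug]
        for drug, call in antibiogram.items()
        if call == "R" and drug in ANTIBIOTIC_CLASS
    }
    return len(resistant_classes) >= threshold

isolates = [
    {"ciprofloxacin": "R", "gentamicin": "R", "imipenem": "R", "colistin": "S"},
    {"ciprofloxacin": "R", "levofloxacin": "R", "gentamicin": "S", "colistin": "S"},
]

mdr_rate = 100 * sum(is_mdr(iso) for iso in isolates) / len(isolates)
print(f"MDR rate: {mdr_rate:.1f}%")  # first isolate is MDR, the second is not
```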
Results
This study sought to characterize the pattern of antibiotic resistance among P. aeruginosa isolates obtained from three major hospitals in the largest urban center in Aleppo, northern Syria.A total of 138 isolates were obtained from ICU patients with LRIs, and the rest (39 isolates) came from patients with UTIs.Around a quarter (26%) of the LRIs isolates and over half (59%) of the UTIs isolates came from women.Generally, tested P. aeruginosa isolates displayed high levels of antibiotic resistance to the majority of antibiotics that are routinely used in local clinical settings (Table 1).Levels of susceptibility to amoxicillin/clavulanic acid and nitrofurantoin were the lowest: 2.3% and 3.2%, respectively.Susceptibility to each of clarithromycin, tetracycline, azithromycin, cefotaxime, and doxycycline never exceeded 10%.Considerably higher susceptibility was obtained with imipenem (56.1%) and meropenem (59.1%); however, the most optimal response in vitro was obtained with colistin (89.1%).
Discussion
To put our results in the right context, a comprehensive multicenter study by Sader et al. [9] was used as a major reference point. That study covered 31 medical centers located in 13 European countries plus Turkey and Israel, and it reported on antibiotic resistance and MDR of P. aeruginosa. Generally, the susceptibility rates uncovered in our study were alarmingly lower than those reported by Sader et al. [9]. Examples include the fourth-generation cephalosporin cefepime (21.7% in Syria versus 71.4% in the 15 countries covered in [9]), the fluoroquinolone levofloxacin (34.7% in Syria versus 64.1% in the 15 countries covered in [9]), and piperacillin/tazobactam. Additionally, resistance to ciprofloxacin was as high as 64.9%. These antibiotics are widely and frequently misused in Syria due to their low prices and the ease with which the general public can obtain them without a prescription.
Resistance to gentamicin (70.3%) was much higher than to amikacin (40%).It is worth noting that the latter antibiotic is used less frequently in local healthcare establishments, mainly due to its overly publicized ototoxicity and nephrotoxicity.Also, amikacin is far less vulnerable to the destructive effects of aminoglycoside-modifying enzymes produced by P. aeruginosa.Nevertheless, the Syrian figures are much higher than their counterparts from Europe, Turkey, and Israel [9], where resistance rates to gentamicin and amikacin were found to be 22% and 11.4%, respectively.
Nearly 54% (95 isolates) of the tested P. aeruginosa isolates were MDR. Reported rates of MDR among P. aeruginosa vary widely based on sample size, source of samples and geographical locality. For instance, numerous studies from Iran have reported on multidrug-resistant P. aeruginosa; one of the larger studies [10] reported an extremely low rate (5.46%). A higher rate (31.9%) was reported by Sader et al. [9] from two Middle Eastern countries and 13 European countries. Still, the latter rate is substantially lower than the figure uncovered in this study.
There have been many reports worldwide describing the trend of increasing antibiotic resistance in general and MDR in particular among P. aeruginosa [11,12].For example, a nine-year American study showed a considerable increase in multidrug-resistant P. aeruginosa, from 1% in 1994 to 16% in 2002.This is a good indicator of the degree of increase in MDR in the largest industrialized country in the world [13].On the bright side, reviving some old antibiotics might offer an alternative way to fight MDR [14].Our results are necessary for guiding local health authorities in Syria; however, on a global scale, our findings help to fill a significant information gap by demonstrating such a high rate of antibacterial resistance.Our data is expected to encourage correct use of antibiotics and to persuade practitioners to rely more on antibiotic susceptibility tests in making treatment decisions in individual cases.
Table 1 .
Antibiogram depicting antimicrobial susceptibility of Pseudomonas aeruginosa isolates collected between September 2011 and September 2012 in Aleppo, Syria
|
v3-fos-license
|
2020-05-31T13:05:16.280Z
|
2020-05-30T00:00:00.000
|
219105623
|
{
"extfieldsofstudy": [
"Medicine",
"Psychology"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1111/jonm.13054",
"pdf_hash": "470e86438249e14fcbb4da70a4a4363233ebe6c3",
"pdf_src": "ScienceParsePlus",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46460",
"s2fieldsofstudy": [
"Medicine",
"Business"
],
"sha1": "706b9d9feeac7e728e218b010cee203ee325b654",
"year": 2020
}
|
pes2o/s2orc
|
Nurse managers in perioperative settings and their reasons for remaining in their jobs: A qualitative study
Aim: The study describes what helps nurse managers maintain the strength to keep going as leaders. Background: Good leadership is important for the quality of patient care, patient satisfaction in care and efficiency. Many nurse managers stay on despite challenges at work. Methods: Twelve nurse managers were interviewed. Data were analysed by systematic text condensation according to Malterud. Results: The results were as follows: A—Walking side
| INTRODUCTION
The management role in surgical departments requires a high level of skill and competence, including providing health care, leading staff, organising workload and handling ethical and legal questions. Nurses' work performance and creativity are dependent on caritative leadership (Bondas, 2003; Gu, Hempel, & Yu, 2019). Caritative leadership may affect nurses' intentions to remain in their jobs (Arakelian, Rudolfsson, Rask-Andersen, Runeson-Broberg, & Walinder, 2019a), nurse and patient satisfaction, and outcomes in care (Wong, Cummings, & Ducharme, 2013). Nurses describe their nurse manager both as a facilitator (Arakelian et al., 2019a) and as someone who betrays or is dismissive (Logde et al., 2018). Furthermore, the nurse manager's behaviour towards his/her employees has a significant impact on the employees' perception of formal and informal power as well as on their professional skill development (Laschinger, Wong, McMahon, & Kaufmann, 1999). Moreover, nurse managers affect their employees, and clarity in management structure also means lower levels of work tension among employees and increased efficiency at work (Gunawan, Aungsuroch, Nazliansyah, & Sukarna, 2018).
Furthermore, studies point to the challenges and complexity of nursing management as it contains both the rules and regulations of administration and the more personal but less defined aspects of human caring, both of which represent very specific demands.Laschinger et al. (1999) indicated that nurse managers present a link between caring for staff and an administrative responsibility.The nurse managers' leadership sometimes includes a conflict between being clinicians and at the same time being distanced from clinical practice.When priority was given to clinical practice, leadership tasks often came second (Sorensen, Delmar, & Pedersen, 2011).Furthermore, despite their clinical expertise, many nurse managers are exposed to a management role without leadership training (McCallin & Frankson, 2010).There are great demands on nurse managers, who must harmonize the demands of management, unions and staff.They are expected to be strategic planners, representatives of the human resources (HR) department, quality specialists and clinical experts (McCallin & Frankson, 2010).Nurse managers are also responsible for ensuring a good, healthy work environment for their employees and excellent care for their patients (Anthony et al., 2005).Gunawan et al. (2018) indicated that nurse managers are high achievers who struggle with conflicting feelings: on the one hand, being a nurse manager is a positive challenge, while on the other hand, nurse managers have negative experiences due to the personal conflict of being removed from their front-line care job, which consequently makes them want to leave their unit.According to Adriaenssens, Hamelink, and Bogaert (2017), job demands, job control and social support from team managers were predictors of well-being in nurse managers, which in turn included job satisfaction, work engagement, less psychosomatic distress and fewer turnover intentions.
To meet and include all dimensions of nursing management, the idea of caritative leadership was developed to combine nurse managers' administrative responsibilities with caring based on human mercy and love. Caritative leadership was derived from Eriksson's theory of caritative caring (Eriksson, 1992, 1997) and was further developed as the theory of caritative leadership (Bondas, 2003). Caring administration means seeing the uniqueness of the employees and their abilities to 'minister to' or help the patients (Bondas, 2003). To expand the conceptual understanding of caring in nursing leadership using a meta-ethnographic analysis, five relation-based rooms were identified in 'the house of leadership', each representing one aspect of leadership. Nurse leaders move back and forth between these rooms. The rooms are the patients' room, the staff room, the organisational room, the superior's room and the secret room. The superior's room is about peer relationships, and the secret room is a place where the manager can be alone to reflect and think things over.
The discussion regarding caring in nursing leadership indicates that caring is a conscious movement between the different rooms mentioned above (Solbakken, Bergdahl, Rudolfsson, & Bondas, 2018).
Therefore, it can be assumed that the opportunity to metaphorically walk between different rooms indicates the development of a caring atmosphere, which influences the desire to continue as a nurse manager. Consequently, our study focuses on the elements that lead to the perioperative nurse manager's wish to remain in the job.
| Aim
The study describes what helps nurse managers maintain the strength to keep going as leaders.
| Design
A qualitative prospective design was used.
| Study participants
The inclusion criteria were as follows: nurse managers with more than one year's experience in perioperative settings, that is anaesthesia or surgical departments.Fifty-five nurse managers (four men, 51 women) were invited to participate, of whom 12 accepted, all women, between 35 years and 63 years of age (mean age 53 years).
The participants had one to 18 years of experience as nurse managers.Six of the participants were from university hospitals, three from small hospitals, two from central hospitals and one from a regional hospital, all from different parts of Sweden.In Sweden, university hospitals are large hospitals that, in addition to health care, also include medical research and education.County hospitals offer highly specialized care in comparison with local and rural hospitals.
Convenience sampling was used to enable participants from different age groups and work experiences to be included.
| Procedure
The human resources departments of 12 hospitals (five university hospitals and seven county and minor care hospitals) in Sweden were contacted for information about nurse managers in perioperative settings who met the inclusion criteria.The participants were contacted and invited to take part in the study via their work mail address.A reminder was sent one week later to those who did not answer the invitation the first time.After receiving their informed consent, contact was made to schedule an interview session.
Interviews were conducted via telephone in nine cases, due to the long distance to the participants' workplaces and homes, and face-to-face in three cases at the participants' request. The interviews lasted between 54 and 74 min (mean 63 min). There were no differences in quality between the two interview techniques.
After the first interview, we deliberated on the interview guide regarding whether we had received the answers we sought using the questions in the guide.No changes were made to the interview guide after that interview.
| The interview guide
A semi-structured interview guide was used with main questions such as the reasons and prerequisites for working as a nurse manager, plus follow-up questions concerning the driving force and challenges of the position, followed by narratives regarding successful days and days not so successful.Probing questions such as 'Could you explain more?' or 'What do you mean by that?' were also used to deepen the interview.
People's life-world experiences form the focus of this method (systematic text condensation according to Malterud). The analysis was conducted according to the following steps: (a) interviews were transcribed verbatim and read several times so that our team could grasp the whole picture; (b) preliminary themes were identified and meaning units (sections of text concerning the topic of interest) were coded; (c) condensation: the codes concerning the same topic, together with their meaning units, were grouped together; (d) re-contextualisation: the final themes were created and the content for these themes was written. Finally, the text from each interview was read through again with regard to the themes.
No new information was found after seven interviews. However, the remaining interviews were analysed to make sure that no further information could be identified. Authors EA and GR conducted all the steps of the analysis independently. Authors ARA and RW each read half of the interviews, together with the results, to confirm the findings.
The final themes were a result of several discussions between the authors.
| Ethical considerations
The study follows the regulations of the Declaration of Helsinki (World Medical Association, 2013).
TA B L E 1
The themes and their contents
| Findings
Five themes were identified (Table 1), which are presented below with citations.
| Walking side by side with my employees
To create a sense of togetherness, a conversation 'room' was created.
Appointments were booked for these discussions, or spontaneous conversations were held with the employees.Many talked about 'con- To let the employees formulate their own ideas was another source of empowerment, as Nurse Manager 6 stated: 'These (the employees) are incredibly wise people…with their own driving force, bringing their own ideas so I am motivated and stimulated to continue… Had it not been for a group of employees, I probably would not have been here (if it were not for them)'.
| Knowing that I mean something to my employees
Being honest and fair, the participants talked about receiving their employees' confidence almost as an honour, giving them the strength to go on.They always felt welcomed to their workplace both by their employees and by the physicians or anaesthesiologists with whom they worked.Nothing meant more than having an employee tell his/ her nurse manager that 'I wanted to come to this surgical department because you are here' (Nurse Manager 3).As one of the participants explained, 'It gives one such joy to know that one can actually mean something to someone' and that 'It is impossible to put a price on that' (Nurse Manager 3).Nurse managers learned more about their employees' personal lives, which they tried to encourage to the best of their ability.They often served as support for their employees in their personal lives, which was also confirmed by the employees.
This, in turn, gave a feeling of warmth and joy to the nurse managers.
| Talking to myself-asking myself tough questions
Participants in this study indicated that they gathered strength by having an internal dialog with themselves, asking themselves tough questions or pondering things while they were jogging, taking long walks, travelling by train to their workplace or exercising longer at the gym, to 'process' as Nurse Manager 4 expressed it.
…I personally have a rather tough internal dialog with myself…this was no good; in this I could have done better… I have very little time to think, which is a big challenge… it is a part of my mission to keep my head up and think ahead … That's why it works for me to commute.It's really nice to sit on the train… to have time to think and plan… so the time on the train is really important to me.
(Nurse Manager 8) This was a way of preparing oneself for difficult situations, for example if a decision was to be communicated to employees that 'I might not support 100% but we don't have any choice but to go along with it…' (Nurse Manager 7).Looking ahead and looking forward were important tasks as a nurse manager, that is not to sit still in the same place but to move forward and 'drag' or develop the enterprise.Other forms of preparation were also described such as 'mind mapping', thinking about different scenarios and results before, for example delivering a tough decision to one's employees or having a difficult conversation.The nurse managers were their own critics, questioning whether they were their best self in their meetings with their employees or not.Thus, being one's best self or doing one's best was discussed frequently during the interviews.The participants were aware that they were not best at doing everything but they always, wholeheartedly, wanted to 'at least do my best' (Nurse Manager 10).
| Having someone to talk to, to decrease the feeling of being alone
The participants said that having the support of co-nurse managers and having someone to talk to were important prerequisites for building their inner strength.They needed to talk about everyday topics or specific cases, and ask for advice.
… I need to have somebody to talk to and to know that she is there … I think it is important for me as a first-line manager that I don't end up in a vacuum between my staff and myself… (Nurse Manager 5) All of the participants were very well aware that their position meant that they were 'not a part of the group', which created a need to have someone to talk to.
…You can feel lonely at the top…mostly at lunch hour or when it's time for a coffee break… (Nurse Manager 8) The support group often included other nurse managers from the same perioperative settings or other organisations in the hospital.
However, not everyone had a network of co-nurse managers to whom they could turn with their questions or problems.A few had a coach or a mentor or a psychologist.
… these three (co-nurse managers) who are health care managers, plus my superior manager, we have a strategy meeting every week where she (superior manager) helps us with our questions.Had she not been in that position, I would not have applied for the job here.
| Leading and managing in my own way-the fear of not succeeding is my motivation
The nurse managers expressed that they were strong and confident in their position as leaders and managers.Having life and work experience, and knowing the health care business, they now had authority and were respected.
… My role carries a lot of weight with the surgeons, I can lean on my co-workers for support, I am confident in myself and I am confident in my life and I am confident in my role as manager.
(Nurse Manager 3) Many also talked about being successful in making changes or in 'putting one's foot down' (Nurse Manager 7) against authorities, taking responsibility for their employees' rights or ensuring that what they did was 'good enough'.They also had the courage to put their own mark on their leadership, in many cases by watching and learning from their own former managers and leaders.How to act or not to act as a nurse manager was discussed frequently and focused on their feelings back when they were employees themselves.
… an incredible number of demands from so many places; you are pulled in too many directions…you set a goal that is so high that you never reach up to it … you always try to run a little faster and do much more… you have to dare to slow down… go easy on yourself about what is good enough… (Nurse Manager 9) The fear of failure was another driving force mentioned by one of the participants.Several participants mentioned that they were trying to satisfy their employees, their superior managers and themselves, and that all three were connected.The relationship with the superior manager and gaining his/her confidence and trust, and having that trust returned, was an essential source of empowerment to the participants.
… I have set up a goal, I'll get there… I think it's probably the fear of failure that motivates me… (Nurse Manager 8) A goal was set and a promise was made, a promise that the nurse manager would not betray.
…The goal is to become XXX's best surgical department.Together with my four managers (this person is Head Nurse Manager of four first-line managers), we are natural drivers, together with the employees.
This is not a one-man show; it is something we do together… (Nurse Manager 8)
| DISCUSSION
The findings of this study are discussed through the lens of the caritative leadership theory presented by Bondas (2003) and the meta-synthesis of caritative leadership resulting in the metaphors of different rooms in the 'house of leadership' by Solbakken et al. (2018).
The first theme of this study concerned doing good for one's employees, as described by Bondas (2003), recognizing their uniqueness and their potential for ministering to the patients.According to Bondas (2003), nurse managers are responsible for caring for both the patients and their employees' dignity.Furthermore, studies suggest that good leadership affects nurses' work performance (Gu et al., 2019), thus affecting patient care (Bender, Williams, Su, & Hites, 2017).Good leadership also means that specialist nurses in perioperative settings want to remain in their positions (Arakelian et al., 2019b).In their journey of doing good, the nurse managers go back and forth between the 'staff room' and their 'secret room' (Solbakken et al., 2018).
In the 'staff room' (Solbakken et al., 2018), nurse managers helped their employees to see and believe in their own capacity.
They also learned more about their employees' personal lives.
They were anxious to be on their employees' side and to walk together with them, meaning that the employees' wishes and stories were recognized, in line with Bondas (2003). Being strong and fair and receiving the employees' trust were seen as important sources of strength for the nurse managers. A nurse manager's reward was when employees chose to work where they knew a certain nurse manager worked.
In this study, the participants stressed the importance of showing their employees respect and being a facilitator. Arakelian et al. (2019a), Bondas (2009), Uhrenfeldt and Hall (2009) all indicated that by knowing their employees and their competencies, nurse managers were able to make individual plans for their employees' development. Honkavuo, Sivonen, Eriksson, and Nåden (2018) pointed out the sense of togetherness, which was also one of the findings in our study, and discussed the concept of 'ministering' in nursing administration. This was described as a mutual and two-way relationship between the leaders and their employees, helping and benefitting both parties.
Three of the themes in the current study are based on the concept of being in the 'secret room'. The 'secret room' is used as a way of preparing oneself for difficult situations and is a place to be with oneself to gather strength and to have an honest dialog with oneself. Reflecting on things or preparing oneself for difficult tasks was described as a way of moving forward. Moving forward implies not standing still, but it also means trying to develop oneself as a leader both emotionally and professionally. This movement was confirmed by Raelin (2016) who stated that leadership accrues in relationships with employees and that leadership and reality are mobile and changeable.
Working hard to satisfy both the leadership above and their employees, the study participants agonized about whether their goals were set too high or whether their performance was good enough. Edmondson, Higgins, Singer, and Weiner (2016) pointed out that individuals tend to alter their performance due to engagement in their work and that this is common in organisations with both theoretical and practical orientations, such as in health care. Adriaenssens et al. (2017) showed that support from one's superior manager was one of the predictors of well-being among nurse managers.
Since nurse managers may struggle with the inner conflict caused by being administrators, caring for staff and being clinicians (Gunawan et al., 2018; Laschinger et al., 1999; McCallin & Frankson, 2010; Sorensen et al., 2011), and since some of the participants in this study became managers with no initial training and education, it is even more important to have training and support from superior managers (Laschinger et al., 1999). Furthermore, Hagerman, Engstrom, Haggstrom, Wadensten, and Skytt (2015) described the importance of having structural, supportive conditions as nurse managers, but also stressed that nurse managers must believe in their own competence and abilities.
The importance of operating according to the core of caritative leadership and commuting between the 'staff room' and the 'secret room' in the house of leadership (Bondas, 2003; Solbakken et al., 2018) was an obvious finding of this study. These elements gave nurse managers the strength to go on as leaders. In addition, they wanted to be offered continuous support in different forms, both administrative and emotional, along with proper education and training throughout their time as leaders (Gundrosen, Thomassen, Wisborg, & Aadahl, 2018).
| Limitations
Credibility and transferability were ensured by describing the procedure and the data analysis as clearly as possible. Authors EA and GR had a pre-understanding of perioperative settings, which increased credibility and confirmability, thus improving the researchers' understanding of the phenomenon being studied (Nakkeeran & Zodpey, 2012). The topic of pre-understanding was discussed by all authors, ensuring that the authors' pre-understanding did not interfere with the interpretation of the results. In this study, both face-to-face interviews and telephone interviews were used. No differences were found between the two interview methods, and all interviews were included in the study. No male participants were included in the study: this mirrors reality, as most nurse managers in surgical departments are women.
bringing us closer together' (Nurse Manager 9), having an open-door policy. One of the participants had created structured 'walking employee dialogs', which she undertook with every employee annually, walking side by side and outside the operating room. … It's the best management training, to talk … We have co-worker walks. We usually walk for between one and two hours… it's great because you think much better and much more is said when you walk than when sitting behind a desk… (Nurse Manager 4) At the same time, the nurse managers were to be 'the engine' (Nurse Manager 10) motivating what their employees did, and treating them with respect. … Together we can do something good…. I like to meet people with respect. You usually get so much back…… (Nurse Manager 8) The nurse managers described a journey within oneself as a leader, learning how one chose to communicate with one's employees, or how one acted when meeting with one's employees, acknowledging their own power. This was expressed as 'one should be very humble with the power that one has (as a leader) affecting other people's lives…' (Nurse Manager 9). A sense of satisfaction or a 'cool feeling' (Nurse Manager 1) was experienced 'seeing the strengths of those you worked with' (Nurse Manager 9), helping them to believe in their own knowledge, ability and capacity, and seeing how several of them grew, achieved greater positions and became proud of themselves. …it is one of the greatest joys as a manager to work with all employees and see them develop and take new roles and reach places when they hadn't really believed in themselves…I create the conditions for those I lead… (Nurse Manager 9) …private matters….several have said "If you hadn't been there for me back then, I do not know what I would have done; you were my greatest support in my separation…" and it is absolutely amazing that you get such trust from employees… (Nurse Manager 3) Receiving responses from the employees made the nurse managers learn to improve themselves and to 'grow in my role', as one of them stated (Nurse Manager 1).
… I start to mind map what this meeting/issue is about, what do I think is its focus, what do the employees think is the focus and what are the various parts that are included… and so I begin to try to structure my thoughts … where I should begin, in order to end in the right place… (Nurse Manager 10) The study followed local ethical guidelines and regulations (Centrum for Research Ethics & Bioethics, 2018).
|
v3-fos-license
|
2018-04-03T01:33:53.492Z
|
2016-10-19T00:00:00.000
|
4755018
|
{
"extfieldsofstudy": [
"Chemistry",
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.nature.com/articles/srep35666.pdf",
"pdf_hash": "b9c9ca545f0e8cb3f793dadca99a701dc97c71fe",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46461",
"s2fieldsofstudy": [
"Biology"
],
"sha1": "b9c9ca545f0e8cb3f793dadca99a701dc97c71fe",
"year": 2016
}
|
pes2o/s2orc
|
Impact of sialic acids on the molecular dynamic of bi-antennary and tri-antennary glycans
Sialic acids (SA) are monosaccharides that can be located at the terminal position of glycan chains on a wide range of proteins. Post-translational modifications, such as N-glycan chains, are fundamental to protein function. Indeed, the hydrolysis of SA by specific enzymes such as neuraminidases can lead to drastic modifications of protein behavior. However, the relationship between desialylation of N-glycan chains and possible alterations of receptor function remains unexplored. Thus, the aim of the present study is to establish the impact of SA removal from N-glycan chains on their conformational behavior. We therefore undertook an in silico investigation using molecular dynamics to predict the structure of an isolated glycan chain. We performed, for the first time, 3 independent 500 ns simulations on bi-antennary and tri-antennary glycan chains displaying or lacking SA. We show that desialylation alters both the preferential conformation and the flexibility of the glycan chain. This study suggests that the changes in glycan-chain behavior induced by the presence or absence of SA may explain changes in protein function.
Results
Sialylated and non-sialylated bi-antennary chains display different conformations. In order to identify the main conformations adopted by sialylated and non-sialylated glycan chains, we performed a clustering analysis on the simulations. For the bi-antennary glycans, the main clusters allowed us to identify some of the conformational states described in previous studies 12 . Among those conformations, we mainly found the "broken wing" (α 1-6 antenna along the inner-core, Fig. 1Aa) and the "bird" (α 1-6 antenna perpendicular to the inner-core, Fig. 1Ab). However, the conformational state proportions varied for each glycan chain (Fig. 1C). Indeed, for the disialylated monofucosylated bi-antennary glycan (Ng-c2Sf), the "broken wing" conformation was observed for 70% of the 3 simulations, and the "bird" conformation for 25%. The remaining 5% corresponded to intermediate or other conformational states.
Without SA, the "broken wing" conformation was decreased from 70% to 36%. In parallel, the "bird" conformation increased from 25% to 53%. Moreover, a third conformation also described in the past emerged: this one is named the "back-folded" conformation (α 1-6 antenna folded behind the inner-core. Fig. 1Bc). This conformational state represented only 3% of the simulation. The same experiment was also performed on disialylated bi-antennary glycan (Ng-c2S). Here, the removal of SA highly decreased the "broken wing" conformation from 49% to 29% and the "bird" conformation was increased from 43% to 59% (Fig. 1Cb). These data suggest that SA influence the distribution of each conformational states during the simulation. This hypothesis was supported by the analysis of the contact map of both sialylated and non-sialylated chains (Fig. 2). The Gromacs g_mdmat tool was used to visualize the mean distance between each block (each residues). The comparison between maps with or without SA allowed us to estimate the growing gap between each block following the desialylation process. Thus, we show that the desialylation of the glycan chain increases the distance between each block.
Interestingly, when SA were removed from the bi-antennary chains, the clustering process was more difficult, as the number of clusters increased for the same cut-off value (0.3 nm). Indeed, the Ng-c2Sf counted an average of 20 clusters. The suppression of SA (Ng-c2f) raised this number to 32 clusters (from 26 to 38 clusters for Ng-c2S and Ng-c2). In the same way, the number of intermediate structures increased with the non-sialylated form: from 5% of intermediate structures to 9%. In parallel, we measured the root mean square fluctuation (RMSF) of the galactoses as an indicator of antenna mobility (Fig. 3). We measured RMSF values of 0.66 and 0.67 nm for the two antennas with SA. In the absence of SA, the RMSF of the α 1-6 antenna increased from 0.67 to 0.81 nm (Fig. 3A). The removal of SA did not significantly change the RMSF value of the α 1-3 antenna, but it slightly decreased for the α 1-6 antenna of the Ng-c2S chain (from 0.78 to 0.75 nm, Fig. 3B). Finally, we estimated the number of transitions between the "bird" and the "broken wing" conformations during the 3 simulations. We counted 16 transitions for Ng-c2ASf and 33 for Ng-c2AS. When SA were removed, the number of transitions increased to 47 and 61 for Ng-c2f and Ng-c2, respectively. Those results suggest that while glycans can adopt preferential conformational states, they remain flexible and mobile structures. Moreover, SA seem to be able to influence both of these aspects of the glycan, independently of the presence or absence of fucose.
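A minimal way to obtain such transition counts from per-frame cluster assignments is sketched below; the state labels and the input array are hypothetical, since the text does not describe the exact bookkeeping used.

def count_transitions(labels, state_a="broken_wing", state_b="bird"):
    # count switches between two conformational states in a per-frame label
    # sequence (e.g. cluster assignments), ignoring frames in any other state
    filtered = [s for s in labels if s in (state_a, state_b)]
    return sum(1 for prev, cur in zip(filtered, filtered[1:]) if cur != prev)

# hypothetical usage: frame_states holds one label per recorded frame
# n_transitions = count_transitions(frame_states)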
Dihedral angle distributions are shown in Fig. 4, with sialylated chains drawn as blue lines and non-sialylated chains as red lines. Although each angle displays a different range of values, the dihedral angles can be characterized by their distribution profiles. On Ng-c2Sf, the GlcNAc5(β 1-2)Man4 glycosidic bond displayed a unimodal distribution around + 160° for both ϕ and ψ angles (Fig. 4A), suggesting that this glycosidic bond is not an important flexibility point of the chain and remained particularly stable during the whole simulation. On the contrary, the ϕ Man4(α 1-3)Man3 angle showed 2 major peaks at + 90° and + 170°, but was able to take a large range of values from + 60° to + 180° (Fig. 4B). The ψ Man4′ (α 1-6)Man3 angle displayed a bimodal distribution, with two angles explored: + 70° for 79% of the simulation and + 180° for the remaining 21% (Fig. 4C). Those linkages exhibited an increased flexibility, either by having a large range of explored angles or by displaying a plurimodal distribution.
As described in Table S1 and Fig. 4, in the non-sialylated glycan configuration (red lines and surfaces), the glycosidic linkages belonging to the inner-core (blocks 1, 2, and 3, e.g. Fig. 4D) remained almost identical after the removal of SA. In contrast, the desialylation process caused some modifications of the Fuc1′ (α 1-6)GlcNAc1 and Gal6(β 1-4)GlcNAc5 angles (Fig. 4E,F). Nevertheless, the position of those angles on the chain (mostly at the end part) limited the impact of their changes on the overall glycan conformation. The strongest variations were observed for the ϕ Man4(α 1-3)Man3 dihedral angle: the distribution of the two peaks at + 90° (35%) and + 170° (65%) was inverted, so that the + 90° angle represents 52% of the simulation and the + 170° angle the remaining 48% (Fig. 4B). In the same way, the ψ angle of the Man4′ (α 1-6)Man3 glycosidic linkage displayed a bimodal distribution and saw the respective representativeness of each peak modified after the removal of SA (Fig. 4C). The same analysis was performed on the Ng-c2S and returned comparable results on antenna flexibility. However, the absence of the fucose seemed to allow more motion of the inner-core in comparison with Ng-c2Sf, as the angles displayed a greater displacement or distribution modification. Taken together, these results suggest that the desialylation process is more likely to modify dihedral angles belonging to the antennas or involved in linking the antennas to the inner-core, such as the Man4(α 1-3)Man3 or Man4′ (α 1-6)Man3 glycosidic bonds.
New visualization of the mobility of bi-antennary chains: "umbrella visualization". To correlate the changes observed on the non-sialylated glycan chain with a potential impact on protein accessibility, we developed a new method to visualize our results: we called this representation the "umbrella visualization". As described in the materials and methods section, this representation allowed us to estimate the protein surface that would be shadowed or covered by each antenna of the glycan chain. On the disialylated monofucosylated bi-antennary glycan (Fig. 5A), the first antenna mainly covered one spot located 1.06 nm from the center of the inner-core (GlcNAc1; GlcNAc2; Man3), at coordinates (0.7, − 0.8). This position was covered in 63% of the simulation, while the remaining 37% was spread between the (0.7, − 0.8) spot and the inner-core. Meanwhile, the second antenna was able to cover 2 very distinct positions. The first one was the
most important, as it represented 61% of the total simulation (− 0.4, − 0.3) and was located very close to the inner-core of the glycan chain, at 0.50 nm. On the contrary, the second spot was located far from the inner-core (− 1.2, − 0.7), at 1.39 nm, and represented 39% of the simulation.
After the removal of SA (Fig. 5B), on the first antenna, the spot located at (0.7, − 0.8) was still present. However, it became less representative, with only 31% of the simulation (63% when SA are present). Moreover, the other positions (69%) were widely spread from the original position to the inner-core within a square of 1.5 nm per side. The second antenna also displayed changes in its distribution. The two spots previously described were still present, but their respective representativeness changed. The first one, located close to the inner-core (− 0.4, − 0.3), decreased from 61% to 35% after desialylation. At the same time, the second spot, far from the inner-core (− 1.2, − 0.7), increased from 39% to 61% when SA were removed. Finally, a third spot appeared next to the inner-core (0.50 nm), at coordinates (− 0.3, 0.4). This spot represented only 4% of the total simulation. In parallel, we showed that the distance between the end of each antenna and the inner-core could be correlated with only a few specific dihedral angles. Indeed, the α 1-6 antenna profile could be associated with the ψ Man4′ (α 1-6)Man3 dihedral angle, with a Spearman's rank correlation coefficient of 0.60 (Ng-c2Sf) and 0.76 (Ng-c2f), and the α 1-3 antenna profile was correlated with the ϕ Man4(α 1-3)Man3 dihedral angle, with a coefficient of 0.84 (Ng-c2Sf) and 0.87 (Ng-c2f). When merged, those results show that removing SA increases the area explored by both antennas of the glycan chain. Moreover, only a few glycosidic bonds seem to be involved in this process. This suggests that the removal of SA from the glycan chain modifies the coverage profile and could impact protein accessibility.
Sialylated and non-sialylated tri-antennary glycan chains display different conformations.
Among the many varieties of existing N-glycans, the bi-antennary model is the most studied 12,15 . However, N-glycans can also be present in a tri-antennary form at the surface of human proteins 16,17 . As a consequence, we decided to extend our study to the trisialylated monofucosylated tri-antennary glycan (Ng-c3Sf). The clustering process applied to this structure allowed us to find one major conformational family representing 68% of the simulation (Fig. 6A). In this conformation, the newly added antenna (GlcNAc5″ , Gal6″ , and NeuAc7″ ) folded along the inner-core, reminiscent of the "broken wing" conformational state previously described for bi-antennary chains (Fig. 1Bc). Interestingly, a rotation around the Man4′ (α 1-6)Man3 linkage was also able to invert the position of the second (GlcNAc5′ , Gal6′ ) and the third antenna (11% of the simulation, Fig. 6B). The position of the glycan was then closely superimposable on the equivalent bi-antennary conformation. When SA were removed, this predominant conformational state became even more representative of the simulation (87%). This stabilization of the glycan conformation was also confirmed by the measurement of the RMSF. The removal of SA decreased the RMSF from 0.60 nm to 0.53 nm (2 nd antenna, Gal6′ ) and from 0.84 nm to 0.57 nm (3 rd antenna, Gal6″ ). The absence or presence of the fucose on the inner-core did not deeply modify the global arrangement of the glycan. However, this residue limited the rotation around the Man4′ (α 1-6)Man3 linkage by blocking the ends of the second and third antennas. Those results suggest that, despite the addition of a new antenna, similar structures can be found between bi- and tri-antennary glycan chains. Moreover, as observed for the bi-antennary glycan, the removal of SA does impact the global arrangement of those structures.
"Umbrella visualization" of tri-antennary chains. The "umbrella visualization" of the first antenna gave a similar profile to the one characterizing the bi-antennary chain. This antenna explored a wide area from 0 to around 1 nm distance from the inner-core, regardless of the presence or the absence of SA (Fig. 7B). The second antenna explored the same spot located at (− 1.3, − 0.5) for most of the simulation (90%). This antenna was also able to explore another area closer from the inner-core, at (− 0.3, − 0.4) for about 10% of the simulation. Finally, the third antenna was located at various positions around the inner core. Two of these positions correspond to short distances from the origin: at coordinates (0.3, 0.7) for 21% and at coordinates (− 0.4, − 0.3) for 41% of the simulation. The two other spots are situated further from the inner-core and correspond to 27 and 11% of the simulation. When SA were removed, this antenna mostly explored (77% of the simulation) the spot located at (− 0.4, − 0.3). According to those results, it appears that the desialylation process impacts the tri-antennary glycan chain in a different way than the bi-antennary chain. Here, only the third antenna seems to be deeply impacted by the removal of SA.
Discussion
Many studies have examined the roles of N-glycosylation in the stability and structure of proteins 18,19 . Some of these N-glycan chains exhibit sialic acids on their terminal portion that may be cleaved by sialidases, also named neuraminidases, thus leading to the disruption of the functionality of the protein 3,5 . Indeed, SA are acidic monosaccharides typically found at the outermost ends of the sugar chains of animal glycoconjugates. In addition to being involved in intermolecular and intercellular interactions, they act as critical components of ligands recognized by a variety of proteins of animal, plant, and microbial origin (sialic acid binding lectins) [20][21][22] . Recognition can be affected by specific structural variations and modifications of SA, their linkage to the underlying sugar chain, the structure of these chains, and the nature of the glycoconjugate to which they are attached. Biological studies show that desialylation induced by neuraminidases alters the function of glycoproteins. In this study, we show for the first time, with molecular dynamics simulations, the structural consequences of the desialylation of N-glycan chains. To achieve this goal, we performed extensive 1.5 μs simulations at 310 K. This sampling was long enough to analyze both the flexibility of the glycan with or without SA and its capacity to adopt preferential conformations. In the presence of SA, bi-antennary chains can be classified into the "back-folded", "bird", or "broken wing" group, the last being the most representative of each simulation, in agreement with Mazurier et al. 23 . While the global arrangement of common blocks is not changed, the desialylation process modifies the representativeness of each arrangement. The "broken wing" conformation becomes less representative in favor of the "bird" or the "back-folded" conformation. In the "broken wing" conformational state, the interaction between the two SA is able to lock and stabilize the arrangement. Thus, by removing SA, we allow the chain to open its antennas more widely (e.g. the "bird" conformation). This "opening" process is also visible in the contact map, in which the removal of SA increases the gap between blocks (Fig. 2), and reflects the fact that SA interact strongly with the "trunk" of the glycan (constituted by the following blocks: GlcNAc 1 -GlcNAc 2 -Man 3). Indeed, we observe shorter distances between the GlcNAc 1 block and the Gal 6′ block in the presence of the sialic acids. Given the nature of the sialic acid, these interactions are mainly stabilized through hydrogen bonds, which tend to fold one arm of the umbrella against the "trunk". The second consequence of this interaction is that the "folded arm" loses its flexibility: this is directly observable through the decrease of the Gal 6′ RMSF upon sialylation. Moreover, in bi-antennary chains without SA, the clustering process is more difficult than in N-glycan chains with SA. The intermediate structures obtained and the measurement of RMSF are two further arguments for the role of SA in the stability of N-glycan chains.
The mobility and the ability of the glycan to adopt a particular conformation mostly depend on the configuration of the glycosidic linkages between each block 13 . N-glycans are post-translational structures essential for the functionality of the associated protein, protein-protein interactions and cell-cell interactions. This underlines the importance of being able to visualize the covered protein area. The cluster identifications provide an easy way to observe glycan arrangements, but they only give a static analysis of the whole simulation and do not allow one to appreciate the dynamic behavior of the investigated glycan chains. Conversely, the measurements of dihedral angles and RMSF provide a better description of the different motions, but the visualization of the results is less intuitive and remains difficult. Thus, we decided to consider the glycans as an opened umbrella in which the antennas, with or without SA, are the ribs. Indeed, such a structure should prevent the interaction between the protein of interest and other partners. As an example, modifications of the electrostatic properties of the protein or steric hindrance could prevent the protein from interacting with unwanted partners such as proteases or pathogens 24,25 . Therefore, we present, for the first time, a new representation of glycan chains taking into account both the main positions adopted by each antenna of the glycan and the intensity of their motions. The "umbrella visualization" is based on the shadow of the glycan chain projected onto a plane, thus mimicking the protein surface shadowed by the antennas. This system allows us to discuss both the flexibility of the glycan (i.e. its ability to explore very distinct areas) and the stability of the glycan (i.e. its ability to avoid spreading from the main conformational state). Indeed, the visualization of overlapping areas on the plane shows that, of the two antennas, the α 1-6 antenna is more flexible because it can explore several distinct conformational states. But this antenna is also more stable, as it does not spread far from these positions. Conversely, as observed with the "umbrella visualization", the α 1-3 antenna can only take one average position, but stays in motion during the simulation and explores the space around this position. The "umbrella visualization" also allows us to corroborate the results obtained with the clustering process and with the dihedral angle measurements. Indeed, the profile obtained with the α 1-6 antenna displays 2 spots located either close to or far from the inner-core. Both locations can be correlated with the two main clusters obtained in Fig. 1. The nearest spot corresponds to the "broken wing" conformational state, where the antenna is folded along the inner-core, and the second spot originates from the "bird" conformations, where the antenna moves far from the inner-core. Finally, the desialylation process causes the emergence of a third spot near the inner-core, which corresponds to the "back-folded" conformation. We were also able to establish correlations between the distance of the antenna's end from the inner-core and the dihedral angle measurements. Interestingly, we show that the motion of the antenna can be correlated with only a few dihedral angles. The α 1-6 antenna profile can be associated with the ψ Man4′ (α 1-6)Man3 dihedral angle and the α 1-3 antenna profile is correlated with the ϕ Man4(α 1-3)Man3 dihedral angle.
This set of results shows that the removal of SA from the bi-antennary glycan chain is likely to markedly modify the interaction between the glycan and the protein, and the surface that the glycan covers.
A large number of glycan structures has already been identified. They have been divided into three groups: complex, high-mannose and hybrid types. Within each group, the glycan varies in length, composition, and number of antennas. Over the past decades, the monofucosylated disialylated bi-antennary glycan has been the most studied type of glycosylation 26 . Nevertheless, several proteins such as human immunoglobulin G present the fucosylated tri-antennary glycan form 27,28 . To our knowledge, our work is the first to report the role of SA on tri-antennary glycans using extensive molecular dynamics simulations (1.5 μs). Interestingly, the clustering process shows that, with SA, the conformations and dihedral angles adopted by tri-antennary glycans are similar to those presented by bi-antennary glycans. The slight differences observed might be due to the glycosidic bonds involved in antenna linkage. In contrast, the similarity observed between bi-antennary and tri-antennary chains associated with SA is lost when SA are removed. On the bi-antennary glycan, the desialylation process causes an "opening" of the structure by promoting the "bird" conformation instead of the "broken wing" conformation, and increases its mobility. Meanwhile, the removal of SA from the tri-antennary glycan generates a change in the average conformation of the newly added antenna and decreases its mobility. In conclusion, we show for the first time that the removal of SA from the terminal position of each antenna can lead to various modifications of the glycan behavior, depending on the studied model. Nevertheless, the final consequence remains identical in both bi-antennary and tri-antennary glycans: the area covered by the glycan chain on a hypothetical protein surface is modified. As mentioned previously, the changes observed in the structural and dynamical behavior of the glycan chains upon desialylation originate from the loss of interactions between the trunk and one of the arms, thus releasing it. Although we focused this study on the impact of desialylation on isolated glycan chains, we believe that it is the first step towards the understanding of the influence of SA on the structure and functions of proteins at the atomic and molecular level. Indeed, the desialylation process leads to the modification of the protected surface of the protein, and thus of the protein/glycan interaction. The stability of the N-glycan chain structure could be an important element to take into account in the immobilization process of proteins. For example, at the surface of serotransferrins, the glycan chains present a "broken wing" arrangement: this conformation reinforces the association of the two lobes and contributes to maintaining the protein moieties in a biologically active 3D conformation 29 . Nevertheless, no data show the consequence of sialic acid presence on this protein structure. Thus, the mobility of the N-glycan observed without SA could destabilize the structure of the protein or its interaction with other partners, and explain why several proteins, such as EGFR and IGFR, exhibit an inhibition of their functions when SA are absent 5,30 .
Methods
Starting structure. All structures are built using the Avogadro software. Each block of the glycan chain is built separately and submitted to energy minimization steps. The chain is then assembled block by block, with energy minimization at each step, until the full glycan structure is obtained. A modified version of the OPLS-AA force field is used to describe the atoms 31,32 . This version has been adapted for our study and describes all the atoms used in our glycan chain models (Table S3). The list of structures and their respective abbreviations is summarized in Table 1.
Simulation. Molecular dynamics simulations are performed at the ROMEO HPC Center, using the GROMACS package 4.6.3 33 . Prior to simulations, each system is submitted to multiple preparation steps. The systems are first minimized in vacuum for 2,500 steps to remove possible steric clashes (steepest descent energy minimization). Periodic boundary conditions are then applied by generating a cubic box around the structures. This box is then filled with the TIP3P explicit water model 34,35 , followed by 2,500 steps of energy minimization in solvent. Finally, 500 ps of NPT molecular dynamics equilibration are performed to bring the system to the target temperature and pressure of 310 K and 1 bar, respectively. Table S4 summarizes the parameters used for the NPT molecular dynamics simulations. 3 independent simulations of 500 ns with different starting points were performed for each system, leading to a total calculation time of 1.5 μs per system with a 2 fs integration time step 36,37 . No counter ions or excess salt were needed. Atomic coordinates are recorded every picosecond and the LINCS algorithm 38 is used to constrain bonds involving a hydrogen atom. Each system is simulated at the physiological temperature of 310 K.
Trajectory analysis. Analyses are performed on trajectory files with a temporal resolution of 10 picoseconds.
Clustering. The GROMACS g_cluster tool is used to determine the most representative conformational states of the glycan chain. Hydrogen atoms are ignored and the gromos clustering method is used during the analysis. We choose the smallest cut-off that classifies 90% of the simulation into the first 5 clusters for the sialylated glycan chain.
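For readers unfamiliar with the gromos scheme used by g_cluster, the sketch below re-implements its greedy logic on a precomputed pairwise RMSD matrix; it is an illustrative approximation under the assumption of a 0.3 nm cut-off, not the GROMACS code itself.

import numpy as np

def gromos_cluster(rmsd, cutoff=0.3):
    # rmsd: symmetric (n_frames, n_frames) matrix of pairwise RMSD values in nm
    n = rmsd.shape[0]
    unassigned = np.ones(n, dtype=bool)
    clusters = []  # list of (centre_frame, member_frames)
    while unassigned.any():
        # neighbours are counted only among frames not yet assigned to a cluster
        neigh = (rmsd < cutoff) & unassigned[None, :] & unassigned[:, None]
        centre = int(np.argmax(neigh.sum(axis=1)))  # frame with the most neighbours
        members = np.where(neigh[centre])[0]        # includes the centre itself
        clusters.append((centre, members))
        unassigned[members] = False
    return clusters

With per-frame cluster assignments in hand, the fraction of frames falling into the first five clusters can then be checked against the 90% criterion described above.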
RMSF. The average position of each atom during each 500 ns simulation is calculated to generate the reference structure needed for the root mean square fluctuation computation. As the galactose is the last common block for both sialylated and non-sialylated antennas, we measure the RMSF of its center of mass as an indicator of antenna mobility. To perform a statistical analysis, the global RMSF is calculated using the block averaging method with windows of 10 ns. Thus, 150 RMSF values are generated for each structure. To compare sialylated and non-sialylated chains, a t-test is performed, with statistical significance set at p < 0.05.
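A minimal sketch of this block-averaged RMSF and the accompanying t-test is given below; the array names, the assumption of one frame every 10 ps (so 1,000 frames per 10 ns window), and the use of an unpaired t-test are illustrative choices, not details taken from the paper.

import numpy as np
from scipy import stats

def block_rmsf(com, frames_per_block=1000):
    # com: (n_frames, 3) centre-of-mass positions of the terminal galactose
    reference = com.mean(axis=0)                  # average position over the whole run
    n_blocks = com.shape[0] // frames_per_block
    values = []
    for b in range(n_blocks):
        block = com[b * frames_per_block:(b + 1) * frames_per_block]
        values.append(np.sqrt(((block - reference) ** 2).sum(axis=1).mean()))
    return np.array(values)

# hypothetical inputs: gal_sial and gal_desial are (n_frames, 3) coordinate arrays
# rmsf_s, rmsf_d = block_rmsf(gal_sial), block_rmsf(gal_desial)
# t_stat, p_value = stats.ttest_ind(rmsf_s, rmsf_d)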
Contact map. The GROMACS g_mdmat tool is used to generate a contact map between the blocks of the glycan (smallest average distance between two residues). One map is generated for the complete glycan chain, and another one is created for the glycan chain lacking sialic acids. Finally, a third map is created by subtracting the second map from the first in order to evaluate whether blocks move apart or closer together.
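The sketch below reproduces the idea behind such a contact map: for every pair of blocks it averages, over the trajectory, the smallest atom-atom distance, and the difference of two such maps highlights blocks that drift apart after desialylation. Input shapes and names are assumptions made for illustration.

import numpy as np

def mean_min_distance_map(frames, block_atoms):
    # frames: (n_frames, n_atoms, 3); block_atoms: list of atom-index arrays, one per block
    n = len(block_atoms)
    dmat = np.zeros((n, n))
    for frame in frames:
        for i in range(n):
            for j in range(n):
                diff = frame[block_atoms[i]][:, None, :] - frame[block_atoms[j]][None, :, :]
                dmat[i, j] += np.linalg.norm(diff, axis=-1).min()
    return dmat / len(frames)

# difference over the blocks common to both chains; positive values mean the blocks
# sit further apart, on average, once the sialic acids are removed
# diff_map = mean_min_distance_map(frames_desial, blocks) - mean_min_distance_map(frames_sial, blocks)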
Dihedral angle. The GROMACS g_angle tool is used to measure dihedral angles between blocks. The ϕ , ψ , and ω angles are measured for each glycosidic linkage.
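For reference, a torsion angle of this kind can be computed from the positions of the four atoms that define it, as in the sketch below; which atoms define ϕ, ψ and ω for a given glycosidic linkage is not specified here and is left to the user.

import numpy as np

def dihedral(p0, p1, p2, p3):
    # torsion angle in degrees defined by four atomic positions
    b0, b1, b2 = p1 - p0, p2 - p1, p3 - p2
    b1 = b1 / np.linalg.norm(b1)
    v = b0 - np.dot(b0, b1) * b1        # component of b0 perpendicular to b1
    w = b2 - np.dot(b2, b1) * b1        # component of b2 perpendicular to b1
    x = np.dot(v, w)
    y = np.dot(np.cross(b1, v), w)
    return np.degrees(np.arctan2(y, x))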
"Umbrella visualization". With the aim to appreciate the covered zone explored by glycan chains on a hypothetic protein, we project the position of each antennae on an oriented xy plan (Supplementary Figure S1). The glycan chain is placed on a xyz coordinate system in such a way that the asparagine residue is set on the origin. With an in house program, we calculate the angle between the z axis and the vector given by the inner-core (blocks 1, 2 and 3: GlcNAc, GlcNAc, Man) of the glycan chain, so the inner-core can be oriented along the z axis. The chain is then oriented around the z axis to keep each antennae on a defined side: the angle between the x axis and the vector given by blocks 4, 3 and 4′ (Man, Man, Man) is calculated and the chain is rotated so that this vector becomes coplanar with the xz plan. Finally, the xy positions of the last common blocks (Gal) for both sialylated and non-sialylated antennas are reported on a new graph.
Statistical analysis. The Spearman's rank correlation coefficient is used to estimate correlations between the antennas' distance from the inner-core and each dihedral angle of the glycosidic linkages. At each of the 150,000 time steps of the simulation, the distance between the end of the antenna and the inner-core, and the value of the dihedral angles of a glycosidic linkage, are read and reported in a table. The Spearman's rank correlation coefficient is then calculated with p < 0.001.
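In practice this amounts to a single call per antenna/angle pair, as sketched below; the synthetic arrays merely stand in for the real per-frame trajectory data.

import numpy as np
from scipy.stats import spearmanr

# hypothetical per-frame series standing in for the measured trajectory values
rng = np.random.default_rng(0)
psi_angle = rng.uniform(60.0, 180.0, size=1000)                  # e.g. psi of Man4'(a1-6)Man3, degrees
antenna_distance = 0.01 * psi_angle + rng.normal(0.0, 0.1, 1000)  # Gal tip distance from inner-core, nm

rho, p_value = spearmanr(antenna_distance, psi_angle)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.1e}")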
Visualization. All visualizations are produced using Visual Molecular Dynamics (VMD) 39 with tachyon rendering mode 40 .
|
v3-fos-license
|
2016-03-22T00:56:01.885Z
|
2010-12-01T00:00:00.000
|
15000952
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.mdpi.com/2072-6694/2/4/2100/pdf",
"pdf_hash": "d7a8ea7ab7e5758734b0e3fae238282e77a49416",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46463",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "d7a8ea7ab7e5758734b0e3fae238282e77a49416",
"year": 2010
}
|
pes2o/s2orc
|
Treatment of Brain Metastasis from Lung Cancer
Brain metastases are not only the most common intracranial neoplasm in adults but also very prevalent in patients with lung cancer. Patients have been grouped into different classes based on the presence of prognostic factors such as control of the primary tumor, functional performance status, age, and number of brain metastases. Patients with good prognosis may benefit from more aggressive treatment because of the potential for prolonged survival for some of them. In this review, we will comprehensively discuss the therapeutic options for treating brain metastases, which arise mostly from a lung cancer primary. In particular, we will focus on the patient selection for combined modality treatment of brain metastases, such as surgical resection or stereotactic radiosurgery (SRS) combined with whole brain irradiation; the use of radiosensitizers; and the neurocognitive deficits after whole brain irradiation with or without SRS. The benefit of prophylactic cranial irradiation (PCI) and its potentially associated neuro-toxicity for both small cell lung cancer (SCLC) and non-small cell lung cancer (NSCLC) are also discussed, along with the combined treatment of intrathoracic primary disease and solitary brain metastasis. The roles of SRS to the surgical bed, fractionated stereotactic radiotherapy, WBRT with an integrated boost to the gross brain metastases, as well as combining WBRT with epidermal growth factor receptor (EGFR) inhibitors, are explored as well.
Introduction
Brain metastases are the most common intracranial neoplasm, occurring in 8-10% of cancer patients, and are a significant cause of cancer-related morbidity and mortality worldwide [1,2]. The incidence of brain metastases is rising, with an annual incidence of approximately 170,000 to 200,000 in the United States [3]. This is caused by a combination of factors: the improved therapeutic efficacy of current cancer treatments, which leads to longer survival (for example, the addition of bevacizumab to chemotherapy as the first-line treatment of metastatic non-small cell lung cancer); failure in a potential sanctuary site from systemic therapy and more frequent brain surveillance for specific cancers that have a predilection for brain metastases; and improvements in modern imaging technology, which lead to the diagnosis of brain metastases at an earlier stage [4,5]. However, such an increase in the incidence of brain metastases in recent years has not been observed in all studies, and may possibly be attributed to under-diagnosis in earlier years [1,6]. The most common origins of brain metastasis include primary cancers of the lung, breast, skin (melanoma), and the GI tract. Among these, primary tumors in the lung are the most common cause of brain metastases, as up to 65% of patients with lung cancer will ultimately develop brain metastases [7].
As the leading cause of cancer mortality, and the most prevalent cancer in men, lung cancer accounted for an estimated 161,840 deaths in the United States with an incidence of 215,020 in 2008. Furthermore, approximately 1.35 million cases were diagnosed worldwide with 1.18 million deaths in 2002 [8]. Therefore, brain metastasis is a very important problem in the overall management of lung cancer. Among the various histologies, small cell lung cancer (SCLC) is the most likely to metastasize to the brain with an 80% probability of brain metastasis after two years from diagnosis [6]. Brain metastases develop in approximately 30% of patients with non-small cell lung cancer (NSCLC) [9]. Among the various histologies of NSCLC, the relative frequency of brain metastases in patients with adenocarcinoma and large cell carcinoma was much higher than that in patients with squamous cell carcinoma [10,11].
Most patients present with significant neurological signs and symptoms that are related to the location and extent of brain involvement. These include both focal neurological changes and general symptoms secondary to increased intracranial pressure [12]. Major clinical presentations are listed in Table 1 [13]. Contrast-enhanced MRI is preferred over non-enhanced MRI or computed tomography (CT) for detecting cerebral metastases and for differentiating metastases from other central nervous system (CNS) lesions [14,15]. The recommended pregadolinium studies include T2-weighted and T1-weighted sequences, and the recommended postgadolinium studies include the T1-weighted and fluid-attenuated inversion-recovery (FLAIR) sequences [5]. Thinner axial slices without skips may be necessary to detect the smallest lesions. If the diagnosis is still in doubt, biopsy should be considered. Brain metastases are usually found at the junction of the grey and white matter, with circumscribed margins and large amounts of vasogenic edema relative to the size of the lesion. Furthermore, they usually present as multiple lesions when arising from a lung primary [16].
Without treatment, the median survival of patients is 4-7 weeks [17][18][19]. The treatment can usually be divided into symptomatic and therapeutic strategies. Symptomatic relief is most commonly achieved with corticosteroids to reduce peritumoral edema and anticonvulsants to prevent recurrent seizures. Systemic steroids alone improve neurological function and prolong survival to approximately two months [20]. Whole brain radiotherapy (WBRT), as the primary treatment approach for brain metastases, improves neurological function and prolongs median survival to three to five months [12]. Due to the poor survival outcomes associated with brain metastases, more aggressive treatments for patients have been sought and investigated. In general, the therapeutic approach largely depends on the number and location of metastases, as well as the extent of extra-cranial tumor involvement. In the following sections, prognostic factors that may influence treatment selection and the various treatment approaches will be reviewed.
Prognostic Factors
A retrospective recursive partitioning analysis (RPA) was performed based on three consecutive Radiation Therapy Oncology Group (RTOG) trials, which included approximately 1200 patients with brain metastases [21]. Three prognostic classes (RPA class I, II and III) were found to be associated with the overall survival of patients with brain metastases. This classification scheme is based on age at diagnosis, presence of extracranial disease, Karnofsky performance status (KPS), and the status of the primary cancer. RPA class I includes patients who are younger than 65 years of age, have a KPS score of ≥70, a controlled primary tumor, and no extracranial disease. RPA class III patients have a KPS score of less than 70. All other patients are in RPA class II. The median survival times for RPA classes I-III were 7.1, 4.2, and 2.3 months, respectively. This RPA classification is the most commonly used prognostic system for brain metastases, with further validation in Phase III and major institutional studies for both NSCLC and SCLC [22][23][24][25]. Despite the common adoption of the RPA classification, clinicians are still faced with the dilemma of tailoring treatments to individual patients, because factors such as the number or volume of brain metastases were not included in the original RPA and estimation of systemic disease was not consistently reliable. As newer data came out, a new prognostic index, the graded prognostic assessment (GPA), was generated based on data from five randomized RTOG studies involving brain metastases [26]. Please refer to Table 2 for details of the GPA scoring system. The median survival times according to GPA score were: GPA 0-1, 2.6 months; GPA 1.5-2.5, 3.8 months; GPA 3, 6.9 months; and GPA 3.5-4.0, 11.0 months (p < 0.05). The GPA prognostic index was further validated by specific diagnosis at the primary site, because brain metastases respond heterogeneously to various treatment approaches depending on histology, and because the patterns of systemic disease and the response to systemic therapy differ between types of primary tumor [27]. For both NSCLC and SCLC, all four prognostic factors remained significant, confirming the prognostic value of the original GPA for lung cancer.
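As a concrete illustration of the RPA rules summarised above, the short sketch below assigns a class from the four inputs; the function name and input encoding are hypothetical, and the GPA is not included because its point assignments live in Table 2, which is not reproduced here.

def rpa_class(age_years, kps, primary_controlled, extracranial_disease):
    # RTOG recursive partitioning analysis class (I, II or III) as described in the text
    if kps < 70:
        return 3
    if age_years < 65 and primary_controlled and not extracranial_disease:
        return 1
    return 2

# example: a 58-year-old with KPS 90, controlled primary and no extracranial disease -> class I
assert rpa_class(58, 90, True, False) == 1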
Symptomatic Management
The management of symptoms from brain metastases has primarily consisted of the use of corticosteroids (e.g., dexamethasone or methylprednisolone) and anticonvulsants. Corticosteroids are given upon initial diagnosis to relieve the symptoms associated with peritumoral edema in approximately two-thirds of patients because of their ability to reduce the permeability of tumor capillaries [28,29]. In a study by Vecht et al., doses of 8 versus 16 mg/day with tapering schedules over four weeks, and doses of 4 versus 16 mg/day continued for 28 days prior to tapering, demonstrated similar KPS improvements at seven days (54% to 70%) and 28 days (50% to 81%) in patients treated with WBRT and concurrent ranitidine [30]. However, patients in the 4 mg/day arm experienced a higher rate of drug reinstitution than patients treated with 8 or 16 mg/day. Furthermore, the greatest KPS improvement was observed in patients in the 16 mg/day arm when this dose was tapered over four weeks. These findings suggest that greater KPS improvement arose from the maximal anti-inflammatory effects of the initial higher doses, while the late toxicity associated with corticosteroids was minimized with gradual tapering. A commonly used dexamethasone regimen in patients with brain metastases is a 10-mg intravenous (IV) bolus, followed by 4 to 6 mg PO every six to eight hours, with cautious, gradual tapering. However, initial corticosteroid use may be reserved for symptomatic patients owing to the common side effects of dexamethasone, including hyperglycemia, peripheral edema, psychiatric disorders, oropharyngeal candidiasis, Cushing's syndrome, muscular weakness, and pulmonary embolism [31].
Approximately 15% of patients with brain metastases present with seizures, and seizures are frequently associated with supratentorial lesions. Seizures can be managed with antiseizure medications, but anticonvulsants are generally not given prophylactically. In a prospectively randomized study by Forsyth et al. [32], one hundred patients with newly diagnosed brain tumors were randomized to prophylactic anticonvulsants or no anticonvulsants. After a median follow-up of 5.44 months, no difference in the rate of seizures at three months or in seizure-free survival was observed, suggesting that antiseizure prophylaxis in brain tumor patients is not necessary.
Whole Brain Radiotherapy
The palliative effects of WBRT for brain metastases were appreciated over half a century ago, and WBRT is widely accepted to extend the median survival of patients to three to six months, compared with one to two months without treatment [5]. Thus, WBRT continues to be the standard of care for patients with brain metastases, especially metastases from lung cancer. Multiple randomized studies have been conducted since the early 1970s to determine the optimal dose and fractionation of WBRT. Selected studies, including the first RTOG study (1971-1973) and the second RTOG study (1973-1976), are summarized below in Table 3. Over 50% of metastases in these studies were of lung origin. Various dose fractionation schedules were studied with no difference in any clinical outcome (i.e., survival times, symptomatic response rates, duration of symptomatic response). However, the ultrarapid schedules of 10 Gy in one fraction and 12 Gy in two fractions were shown in the RTOG trials to be associated with shorter remission periods, less time to progression of neurologic symptoms, and a lower rate of complete disappearance of neurologic symptoms [34]. This suggests better palliative effects from the more prolonged schedules. Although a slight survival advantage may be seen with the 30 Gy/10 fractions regimen over the 12 Gy/2 fractions regimen, this is confined to patients with a good initial response [37]. Therefore, the dose fractionation schedule should be chosen based on the patient's prognosis, and the more prolonged dose fractionation schedules should be used for patients who are expected to live long enough to experience neurologic progression as well as the late radiation toxicity associated with large fraction sizes [38].
In the assessment of tumor response, a thorough imaging study of dose response based on tumor size and histology in 108 patients with 336 measurable lesions after WBRT (30 Gy in 10 fractions) was performed by Nieder et al. [39]. An overall response rate of 59% was observed at up to three months. The complete response rate by tumor type was 37% for SCLC, 25% for squamous cell carcinoma, and 14% for non-breast adenocarcinoma. An improved response rate was observed for smaller tumors without necrosis. In a separate study by Nieder et al., the biologically effective dose (BED) was used to compare different dose fractionation schedules [40]. Increasing BED was found to correlate with increased partial remission based on tumor size.
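The BED referred to here is conventionally computed with the linear-quadratic model; the short sketch below applies that standard formula to the fractionation schedules discussed above, assuming a tumour alpha/beta ratio of 10 Gy, which is a common textbook choice rather than a value taken from Nieder et al.

def bed(n_fractions, dose_per_fraction_gy, alpha_beta_gy=10.0):
    # linear-quadratic biologically effective dose: n * d * (1 + d / (alpha/beta))
    return n_fractions * dose_per_fraction_gy * (1.0 + dose_per_fraction_gy / alpha_beta_gy)

print(bed(10, 3.0))   # 30 Gy in 10 fractions -> BED10 = 39.0 Gy
print(bed(2, 6.0))    # 12 Gy in 2 fractions  -> BED10 = 19.2 Gy
print(bed(1, 10.0))   # 10 Gy in 1 fraction   -> BED10 = 20.0 Gy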
Surgery
In clinical practice, surgical resection is indicated for histological confirmation of diagnosis when the diagnosis is in doubt, and for immediate relief of neurological symptoms due to increased intracranial pressure [12]. Resection of a single brain metastasis has become a standard treatment option after the publication of several prospective studies evaluating the role of surgery combined with WBRT in the treatment of brain metastases [41,42]. In a prospective study of 48 patients by Patchell et al., patients were randomly assigned to surgical removal of the brain tumor followed by radiotherapy or needle biopsy and radiotherapy [41]. Patients began WBRT 36 Gy/12 fractions within 14 days after surgery, whereas patients in the WBRT alone arm began radiotherapy within 48 hours of biopsy or study entry. The recurrence rates at the site of original metastasis for the surgery arm and the WBRT alone arm were 20% and 52%, respectively. The length of time from treatment to the recurrence of the original brain metastasis was significantly shorter for the WBRT alone arm than the surgical arm (median 21 versus >59 weeks, p < 0.0001). The median survival after surgery and adjuvant WBRT was much longer at 40 weeks versus 15 weeks with WBRT alone (p < 0.01). In addition, the patients in the surgical group maintained functional independence (KPS score of ≥70) much longer than the patients treated with radiation alone (median, 38 weeks versus 8 weeks, p < 0.005). The results from this study were confirmed in another study by Noordijk et al. [42], which demonstrated a median survival advantage with the addition of surgery (10 versus 6 months, p = 0.04). This survival advantage was most pronounced in patients with stable extracranial disease and patients ≤60 years old. In contrast, a study of 84 patients by Mintz et al. failed to demonstrate any survival advantage with surgery plus radiation [43]. This is most likely due to the fact that a significant proportion of the patients enrolled presented with active systemic disease and lower functional performance scores compared with the other two studies. The results from all three studies suggest that patients with a single brain metastasis and positive prognostic features, such as the control of extracranial disease and young age, will benefit more from surgical resection followed by WBRT compared with WBRT alone.
Surgery is usually limited to the dominant, symptomatic lesion in patients with multiple metastases. Surgery combined with adjuvant WBRT or stereotactic radiosurgery (SRS) has demonstrated survival outcomes in patients with multiple lesions similar to those in patients with a single brain metastasis in several single-institution studies [44][45][46]. Furthermore, the survival outcome from resection of all lesions approaches that from resection of a single lesion, as shown by Bindal et al. [46]. Modern 30-day surgical mortality rates after resection of a single metastasis range from 0% to 10%. Surgical morbidity includes neurologic deficits (0% to 13%) and non-neurologic complications (0% to 20%) such as thromboembolism and wound infections [5].
In a separate study, the benefit of adding WBRT after complete surgical resection of a single lesion (based on MRI at 2-5 days after surgery) was investigated by Patchell et al. [47]. The overall median follow-up was 43 weeks in the observation group and 48 weeks in the radiation group. Postoperative WBRT was associated with superior local control (90% versus 54%; p < 0.001), distant intracranial control (86% versus 63%; p < 0.01), and overall intracranial control (82% versus 30%; p < 0.001) rates compared with surgical resection alone. However, no overall survival benefit was seen, despite the fact that patients who underwent WBRT were less likely to die from neurological causes than patients in the observation group (14% versus 44%; p = 0.003). The results of this study have recently been confirmed by a randomized Phase III study in Europe, EORTC 22952-26001 [48]. In this study, 359 patients were enrolled with non-progressing primary tumors that had metastasized to the brain. For all patients, brain metastases were initially treated with surgery or radiosurgery. Subsequently, the patients were randomized to prophylactic WBRT or observation. The median survival was 9.5 versus 10 months, respectively. Overall survival was 10.7 versus 10.9 months, respectively. However, WBRT was associated with superior progression-free survival (PFS), intracranial control, and fewer neurologic deaths. This may suggest an overall improvement in patients' quality of life when WBRT is added to surgical resection.
Stereotactic Radiosurgery (SRS) with and without WBRT
SRS is a noninvasive technique that delivers a high dose of radiation to a precisely defined target volume through multiple coplanar or non-coplanar intersecting beams, as well as rotational arcs. This approach allows the center of the target to receive a very high dose relative to the surrounding normal brain tissue, as the intersecting beams achieve a very sharp dose gradient (dose fall-off). In recent years, SRS has emerged as an effective alternative to surgery for up to four small brain metastases. This is mainly due to the pseudospherical shape, well-defined margin, and the relatively small size of brain metastases at presentation [49]. As the lesions increase in size, the dose fall-off becomes less rapid, thus increasing the dose to the volume of normal brain immediately adjacent to the tumor. This increases the risk of edema and radiation necrosis, which may require surgery six months or longer after SRS. As a result, SRS is typically delivered to small lesions up to 4 cm in size [50]. The maximum tolerated dose (MTD) was determined in an RTOG Phase I dose escalation trial, RTOG 90-05, based on tumor size [51]. In this study, the MTD was found to be 24 Gy for lesions ≤2 cm, 18 Gy for lesions 2.1-3 cm, and 15 Gy for lesions 3.1-4 cm in maximum diameter. Each dose level was associated with incidence rates of grade 3-5 CNS toxicity of 14%, 20%, and 8%, respectively. The biological effects of SRS on tumors are largely unknown. However, recent studies have suggested the involvement of endothelial cell apoptosis, microvascular dysfunction, or the induction of a T-cell response against the tumor in addition to the radiation-induced DNA damage [52-54]. Among various studies, patients with good functional performance status, no active systemic disease, and a longer time from the diagnosis of primary lung cancer often had a better prognosis and lived longer (Table 4). Because of the excellent local control rates achieved by SRS, whether its addition to WBRT will lead to a survival benefit over WBRT alone has been investigated in many studies. This approach could be especially beneficial for patients who are not candidates for a craniotomy because of tumor location or existing medical contraindications. Three randomized studies have evaluated the efficacy of WBRT alone versus WBRT + SRS. Most patients had lung tumor histology in two of the published studies [61,62]. In the first randomized study by Kondziolka et al. [61], the local control rate at one year was found to be significantly better when SRS was added to WBRT in a small number of patients (92% versus 0%, p = 0.0016). However, no survival benefit was found with the addition of SRS. This study defined local recurrence as any increase in lesion size on MRI rather than the more usually employed RECIST (Response Evaluation Criteria in Solid Tumors) system. In addition, this study was not controlled for corticosteroid use, radiation changes, or other factors possibly affecting the lesion size on MRI. Therefore, this study is difficult to interpret. The largest study done to date is the randomized controlled Phase III trial (RTOG-9508) of WBRT alone versus WBRT and SRS (Table 5). This RTOG trial enrolled 333 patients with one to three brain metastases and a KPS of ≥70 [62]. The primary end point was overall survival. No statistically significant difference in overall survival was found between the WBRT and SRS arm and the WBRT alone arm (6.5 and 5.7 months, respectively, p = 0.1356).
However, WBRT and SRS led to a significant decrease in local recurrence at one year (18% versus 29%, p = 0.01), despite the fact that 19% of the patients initially assigned to the SRS boost arm did not receive SRS for various reasons. In a planned subgroup analysis, WBRT plus SRS was associated with increased median survival in patients with a single brain metastasis (6.5 versus 4.9 months; p = 0.039). The SRS boost also resulted in improved KPS and decreased steroid use at six months, suggesting an improvement in quality of life. This is a very important observation, as the primary objective of treatment in patients with brain metastases is the improvement of their quality of life, given that their overall survival is often very short. In an unplanned subgroup analysis, an OS benefit was associated with RPA class I, tumor size ≥2 cm, and squamous/NSCLC histology. However, these three cohorts are exploratory subsets that required an adjusted p value of 0.0056 to reach significance [63]. On multivariate analysis using Cox regression, however, RPA class I for both single and multiple lesions and lung primary histology for multiple lesions were found to be significant beneficial prognostic factors. Overall, this study is considered by most to be a negative trial with regard to major end points for multiple metastases. The third study is a three-arm trial (SRS, SRS and WBRT, and WBRT alone) from Brown University, reported in abstract form [64]. Superior local control and fewer brain metastases were reported with the addition of an SRS boost. However, no p value was reported, nor was any attempt made to stratify for further surgery, which would have made this a six-arm trial (and the trial was not large enough to support a meaningful analysis of that kind). Furthermore, the SRS dose was unconventional because the tumor dose was not individualized based on tumor size or volume. These methodological flaws make this study difficult to interpret. The role of adjuvant WBRT after SRS was investigated in a randomized Phase III trial, Japanese Radiation Oncology Study Group 99-1, by Aoyama et al. [65]. The primary study end point was overall survival, but the study was not powered to detect an overall survival difference. This study randomized 132 patients with one to four brain metastases to SRS alone or SRS and WBRT. No survival difference was detected (8.0 versus 7.5 months for SRS versus SRS and WBRT, p = 0.42). The one-year intracranial failure rate was decreased with the addition of WBRT (46.8% versus 76.4%, p < 0.001). More importantly, the average time to deterioration based on the Mini-Mental Status Examination (MMSE) was 16.5 months in the SRS + WBRT arm and 7.6 months in the SRS alone arm (p = 0.05) [66]. The results from this study suggest that WBRT can decrease brain failure and its neurological sequelae when added to SRS.
To date, only limited investigations have directly compared surgery and SRS for asymptomatic patients with good functional performance status and limited numbers of brain metastases. In the randomized Phase III study by Roos et al., SRS and surgery were compared in the setting of adjuvant WBRT [67]. However, this study was closed owing to slow accrual after only 19 patients; with so few patients, no difference in CNS failure-free survival, overall survival, or intracranial control could be demonstrated. In another randomized Phase III study, by Muacevic et al., 70 patients were randomized to SRS or microsurgical resection plus WBRT [68]. The inclusion criteria were: single brain metastasis of ≤3 cm in an operable site, KPS ≥70, and controlled extracranial disease with a life expectancy of at least four months. This study was also closed prematurely owing to poor accrual. The final analysis of 64 patients demonstrated no difference in survival, neurological death rates, or local control. However, patients in the SRS alone group did experience more distant recurrences (p = 0.04); this difference was lost after the effects of salvage SRS were accounted for. SRS was associated with shorter hospital stays, less frequent and shorter courses of steroid use, and less acute low-grade toxicity. However, no difference in toxicity profile or quality of life was observed six months after treatment, owing to the small numbers of patients experiencing adverse effects at that time (SRS: 2; Surgery + WBRT: 6). Because of the inadequate accrual, no firm conclusions can be made regarding selection criteria for SRS ± WBRT versus surgery and adjuvant WBRT in patients with operable single brain metastasis. We believe that the choice of SRS or surgical resection as initial therapy depends on the size, location, and presentation of neurological symptoms, as well as each institution's own policies. These two approaches are complementary in nature and are feasible for most patients as alternative treatment options.
Systemic Therapy and Radiosensitization
Other approaches to enhance the management of brain metastases have been investigated owing to the poor outcomes after WBRT. As shown by Patchell et al., the intracranial recurrence rate after a median follow up of 15 weeks in patients with single brain metastasis treated with WBRT alone was 52% [41], and this rate could be worse in the setting of multiple brain metastases. Such investigations are especially important in the treatment of lung cancer, as it has the highest incidence of brain metastases among all malignancies. In fact, a primary lung cancer can be assumed in 30-70% of patients who have a single brain metastasis [69]. The systemic treatment of brain metastases has generally been difficult owing to the effectiveness of the blood-brain barrier in preventing most chemotherapeutic agents from reaching the CNS. However, the blood-brain barrier may be disrupted when the tumor grows to a certain size, leading to neo-angiogenesis of more permeable vessels. These changes can usually be seen on CT or MRI as the accumulation of contrast medium and the development of edema. Indeed, response rates of brain metastases to chemotherapy alone of 43% to 100% for metastases from SCLC and 0% to 38% for metastases from NSCLC have been observed in small single-institution Phase II studies [69]. This has led to a series of prospective studies investigating the radiosensitizing effects of various systemic agents (Table 6). No such agent has demonstrated a survival benefit thus far, but several have demonstrated increased response rates when combined with WBRT: temozolomide (an oral alkylating agent), nitrosourea + tegafur (a prodrug of 5-fluorouracil), motexafin gadolinium (a metallotexaphyrin that localizes within tumors more than in normal tissues), and efaproxiral (an allosteric modifier of hemoglobin that leads to increased oxygen release into tissue). Overall, there is no strong evidence supporting the use of radiosensitizers with WBRT in current clinical practice.
Neurocognitive Functioning after Brain Irradiation
WBRT is associated with many acute, subacute, and late side effects. The acute toxicities, such as fatigue, hair loss, and skin reactions, are mild and self-limiting. The late toxicities are usually observed in patients with limited brain metastases and well controlled extracranial disease, because these patients tend to survive longer; they include diffuse white matter injury or cerebral atrophy and neurocognitive deficits. Neurocognitive function after cranial irradiation is being evaluated more closely as the efficacy of systemic therapy improves over time. This is of special importance in advanced stage lung cancer owing to the high frequency and early onset of brain metastases from lung cancer.
Neurocognitive impairment has been found frequently in long-term survivors of SCLC after prophylactic WBRT [81,82] and has been seen in patients with existing brain metastases as well. In a cohort of 98 patients with single brain metastasis, four of 38 patients (11%) who survived ≥1 year after postoperative WBRT developed severe dementia associated with ataxia and urinary incontinence [83]. All four were among the 23 patients who were treated with hypofractionated WBRT using fractions larger than 3 Gy/day (4/23, 17%). These toxicities were not observed in patients treated with fractions ≤3 Gy/day, although similar toxicities were seen in one patient treated with 3 Gy/day combined with intra-arterial chemotherapy. These findings suggest that large fractions and radiosensitizers, such as chemotherapy, may contribute to severe neurocognitive deficits in long-term survivors of brain metastases; such effects may not become apparent in patients who survive for less than one year. On the other hand, neurocognitive impairment was observed shortly after starting WBRT when patients underwent serial neurocognitive testing, as shown by Welzel et al. [84]. Those authors nonetheless recommended against avoiding WBRT, since the neurocognitive dysfunction was restricted mainly to verbal memory. In addition, the risk of disease progression will always outweigh the risk of neurocognitive deficits secondary to brain irradiation, since most recurrences can be associated with a neurologic deficit [85].
Some investigators believe that the neurocognitive outcome is directly related to the intracranial tumor response after cranial irradiation, as neurocognitive deficits can be partially explained by intracranial tumor progression [86]. Furthermore, improvement in neurocognitive function in responding patients with multiple brain metastases also depends on the initial and post-treatment tumor volume [87,88]. In a study by Li et al., patients with unresectable brain metastases treated with WBRT were administered a battery of standardized neurocognitive tests by trained and certified nurses or clinical research associates, monthly for six months and then every three months until death [88]. At two months, patients with greater tumor shrinkage had longer median survival, a higher survival rate at one year, and a longer time to neurocognitive deterioration. The cognitive gain was especially prominent in executive function and fine motor coordination. Nine patients were alive at 15 months, and the correlation between tumor shrinkage and executive function as well as fine motor coordination persisted. Furthermore, neurocognitive function was found to be influenced mostly by disease progression early on after WBRT. The patients who became long-term survivors also experienced larger tumor volume reductions after WBRT, and they had the best neurocognitive outcomes.
The combination of SRS and WBRT over WBRT alone has been supported by the results of RTOG 9508 for patients with single brain metastasis, good functional performance status, and no active extracranial disease [62]. No difference in neurological deaths or mental status at six months between the two arms of this study was found. In addition, the rate of neurological deaths in the SRS boost arm was within the 25-50% range reported in other surgery or SRS series [62]. Because of the known toxicity associated with WBRT and the lack of any difference in survival between SRS alone and SRS plus WBRT as described in previous sections, the use of SRS alone as initial treatment for patients with a limited number of lesions has been advocated by many. The difference in neurocognitive function between patients undergoing SRS alone and those undergoing SRS and WBRT has been investigated in two prospective randomized controlled trials [66,89]. In the study by Aoyama et al. [66], neurocognitive function was assessed by serial MMSE after SRS + WBRT or SRS alone. No statistical difference in MMSE scores was found between the two arms, nor was any statistically significant difference found in the rate of MMSE score deterioration after a median follow up of 5.3 months. However, the time to neurological deterioration was significantly longer in patients who received SRS + WBRT than in those who received SRS alone (16.5 months versus 7.6 months, p = 0.05). This was thought to reflect the higher number of intracranial recurrences in the SRS alone group (11 versus 3 patients, p < 0.0001). Five patients who underwent SRS + WBRT, but none in the SRS alone arm, suffered a radiation-related toxic event. Although not statistically significant, a trend of continuing neurocognitive deterioration became prominent after 24 months in long-term survivors treated with SRS and WBRT. These findings from Aoyama et al. corroborate those of Regine et al. and Li et al. in suggesting that, in the short term, WBRT may help to improve neurocognitive function in patients with brain metastasis through its therapeutic effects [66,86,88]. Moreover, significant numbers of patients treated with SRS alone may experience recurrence with neurological symptoms, leading to the recommendation that WBRT be used whenever indicated [85]. However, the late toxicity of WBRT in terms of neurocognitive function in long-term survivors cannot be ignored and warrants further investigation, as such effects may be masked by the short survival of many patients in these studies, which was mostly far less than two years. Recently, the effects of initial treatment with SRS alone or SRS combined with WBRT on learning and memory function were investigated in a prospective randomized study by Chang et al. [89]. Most patients in this study had NSCLC, 1-2 lesions, and RPA class I or II, and the GPA indices of the two arms were well balanced. The study was designed to detect a 5-point decline in the Hopkins Verbal Learning Test-Revised (HVLT-R) and was stopped, after accrual of 58 patients, when a significant decline in the HVLT-R score at four months was observed in the SRS plus WBRT arm compared with the SRS alone arm. The total recall difference persisted at six months, and the patients who received SRS + WBRT demonstrated greater declines in executive function as well. Increased intracranial failure was observed in the SRS alone arm, with approximately 87% of the patients requiring salvage therapy.
However, the one-year survival rate was higher in the SRS alone arm (63% versus 21%, p = 0.003), possibly because of earlier systemic therapy in the SRS group and a greater systemic disease burden in the SRS + WBRT arm. The authors argued for SRS alone as the initial treatment, with close follow up, since intracranial recurrences are likely to be asymptomatic if detected early by imaging studies.
Given the evidence described above, WBRT does seem to have a toxic effect on neurocognitive function over time, although this deterioration can be observed only in long-term survivors. Therefore, WBRT may be omitted in patients with good functional performance status and limited numbers of metastases if those patients have limited extracranial disease and are aware of the risk of intracranial failure associated with SRS alone and of the potential neurological deficits that can result from such failures. Thus, we recommend offering SRS alone to patients who can be monitored closely (e.g., every two months) with MRI. SRS plus WBRT should still be given serious consideration for patients with good functional performance status, controlled extracranial disease, and single brain metastasis, given the survival benefit observed in RTOG 9508 [62]. In contrast, WBRT can actually improve neurocognitive function in patients with radiosensitive tumors, such as lung cancer, who have a poor prognosis and a short life expectancy; WBRT should therefore be recommended for such patients. In recent years, donepezil, a drug used to treat Alzheimer's disease, was shown to have a positive effect on the cognitive function of patients who underwent irradiation for brain tumors [90]. The potential role of memantine, an agent that blocks the pathologic stimulation of the N-methyl-D-aspartate (NMDA) receptor (a receptor involved in learning and memory), in alleviating neurocognitive deficits from WBRT is being investigated in a randomized Phase III study, RTOG 0614, with results pending [91].
Also worth mentioning is the potential contribution of anticonvulsants to the development of late neurological symptoms after WBRT [92]. Therefore, any systemic agents (e.g., anticonvulsants, steroids) that can influence the symptomatic outcome of brain irradiation should be carefully assessed and controlled for in future prospective studies in order to reach firm conclusions regarding the incidence of late radiation toxicity.
SCLC
SCLC is known for its high risk of early hematogenous dissemination, especially to the brain. At initial diagnosis, up to 24% of patients may have brain metastases when MRI of the brain is included as part of the staging evaluation [93], and most patients with SCLC will ultimately develop brain metastases if they live long enough. Prophylactic WBRT has been advocated by many to delay the development of brain metastases and reduce the rate of distant relapse in the brain [94,95]. Many older randomized studies demonstrated statistically significant reductions in brain metastases from 16-73% to 0-13% with the use of PCI, although none was able to demonstrate any survival benefit [96][97][98][99][100][101]. This is mainly due to the lack of patient stratification based on tumor stage (limited versus extensive) and response to definitive therapy. However, PCI was suggested to improve survival in patients who had a complete response (CR) to induction treatment in several retrospective studies [102][103][104]. Subsequent randomized studies of PCI have therefore focused on patients who achieved a CR after initial treatment. Although these studies could not individually demonstrate a survival advantage with PCI, a 5.3% increase in 3-year OS in patients who received PCI (p = 0.01) was detected when individual data from 987 CR patients enrolled between 1965 and 1995 into seven trials comparing PCI to observation were analyzed in a meta-analysis [105]. Most of the patients were men (75%) with good performance status (97%) and limited-stage disease (86%). CR in the chest was assessed by chest X-ray, bronchoscopy, or thoracic CT. The cumulative incidence of brain metastasis at three years decreased from 58.6% in the observation group to 33.3% in the PCI group (p < 0.001). A trend toward decreased risk of brain metastasis was observed with increased radiation dose when four dose regimens were compared (8 Gy/1 fraction, 24-25 Gy/8-12 fractions, 30 Gy/10 fractions, and 36-40 Gy/18-20 fractions, p = 0.02). In addition, PCI seems to have a greater effect on the incidence of brain metastases if delivered sooner after induction therapy (p = 0.01). The association of PCI with a survival benefit was demonstrated again in CR patients in another meta-analysis of 12 randomized trials involving 1547 patients by Meert et al. [106]. Based on these meta-analyses, PCI became a part of the standard of care for SCLC patients in CR.
A survival advantage associated with PCI was also demonstrated in patients with extensive stage SCLC who had a response to four to six cycles of chemotherapy [107]. Disease-progression-free survival was significantly longer in the PCI group (14.7 weeks versus 12.0 weeks, p = 0.02), as was median survival (6.7 months versus 5.4 months, p = 0.003). The risk of symptomatic brain metastases at one year was significantly decreased with PCI (14.6% versus 40.4%, p < 0.001). Notably, brain imaging was not required for this trial; therefore, many patients in the PCI group may in fact have been treated for existing asymptomatic brain metastases rather than receiving true prophylaxis.
Based on the evidence summarized above, PCI should be offered as the standard of care to any patient with limited stage SCLC and a CR after initial treatment, or with extensive stage SCLC and any response after initial chemotherapy.
Although improvement was seen with PCI, a 33% incidence of brain metastases is still observed three years after PCI, as demonstrated in the meta-analysis by Aupérin et al. [105]. Because of the poor prognosis associated with brain metastases after PCI and a possible dose-response effect observed in the same meta-analysis, a Phase III randomized prospective study was conducted by the PCI Collaborative Group to address the question of dose effects in patients with limited stage SCLC and CR after definitive therapy [108]. The standard dose of 25 Gy/10 fractions was compared with 36 Gy delivered in either 18 daily fractions or 24 twice-daily fractions. No statistically significant difference in the total incidence of brain metastases was found at two years between the two dose groups (29% in the standard dose group versus 23% in the higher dose group, p = 0.80). However, a significantly lower incidence of brain metastases as the first site of failure at two years was observed in the higher dose group (6% versus 12%, p = 0.005). The higher dose group had a lower two-year overall survival rate (37% versus 42%, p = 0.05), most likely because of increased intrathoracic failure relative to the standard dose group (48% versus 40% at two years, p = 0.02). These findings imply that intrathoracic disease control affects both the incidence of brain metastases after PCI and overall survival after multimodality treatment. On the other hand, the findings may also reflect heterogeneity in the T and N categories of the patients across the study arms, which could have led to poorer intrathoracic control of locally advanced tumors being more prevalent in one arm than in the other. Furthermore, higher PCI doses are rational only in a select group of patients in whom intrathoracic disease is well controlled. Currently, 25 Gy delivered in 10 fractions is still recommended as the standard of care, given the lack of evidence for increased intracranial control with higher doses and the concern over potential adverse effects of WBRT on neurocognitive function. However, other dose fractionation regimens are reasonable alternatives as well (e.g., 30 Gy in 15 fractions).
NSCLC
The development of brain metastases is also prevalent in NSCLC. In stage III patients, the incidence of brain metastases during the course of treatment can approach or exceed 50% [109,110]. Nonsquamous histology, bulky mediastinal nodes (>2 cm), increased numbers of positive mediastinal nodes, involvement of several nodal stations, younger age, the use of neoadjuvant chemotherapy, and prolonged survival have all been found to be associated with an increased incidence of brain metastases in various studies [110][111][112][113][114][115][116][117]. The brain is also usually the most common site of distant failure [118,119]. The results of selected studies on the incidence of brain metastasis after combined modality treatment are presented in Table 7. Because of the poor prognosis associated with brain metastases and their prevalence in locally advanced NSCLC, the potential role of PCI has been investigated in several studies (Table 8). Although some studies found that PCI significantly decreases the incidence of intracranial metastases [121][122][123][124][125], no randomized study has been able to demonstrate any survival benefit from PCI in locally advanced NSCLC. Thus, PCI is currently not part of the standard of care for locally advanced NSCLC. However, the prognostic factors identified in retrospective studies may guide the selection of patients for future prospective randomized studies aiming to identify a subgroup for whom PCI can lead to a survival benefit.
Neurocognitive Functioning after PCI
Significant late toxicity has been reported in patients with SCLC treated with PCI and concurrent chemotherapy or treated with large fractions [126]. Two randomized controlled trials have specifically examined neurocognitive function after PCI for SCLC [127,128]. Arriagada et al. randomized 300 patients with SCLC in complete remission to PCI versus observation in a prospective study from France [127]. Neuropsychological assessment was performed by a neurologist at baseline and on follow up to 48 months for 229 patients; baseline test results were considered normal in 83% of patients in both arms. Overall, no difference was found between the two treatment arms in higher functions, mood, walking, cerebellar function, tendon reflexes, sensation, or cranial nerve function after two years, and no statistically significant difference in the two-year rates of abnormalities was observed. In a similar study by Gregor et al. [128], 314 patients with limited stage SCLC in CR were randomized to PCI or no PCI. Neurocognitive function was formally assessed with a battery of tests, including the National Adult Reading Test, the Paced Auditory Serial Addition Task, the Rey-Osterrieth Complex Figure Test, and the Auditory Verbal Learning Test. Quality of life, anxiety, and depression were assessed with the Rotterdam Symptom Checklist and the Hospital Anxiety and Depression Scale. Tests of cognitive function revealed cognitive impairment in 24% to 41% of patients in each group, but no significant difference was found between the two arms at baseline. Furthermore, no difference between the two arms was observed in neurocognitive function, gross quality of life, level of anxiety, or depression at one year. Findings were similar in a recent prospective randomized trial evaluating PCI in locally advanced NSCLC by Pöttgen et al. [124]: among 11 evaluable long-term survivors, a battery of neurocognitive tests revealed no deficits in attention, memory, associative learning, or information processing, whether or not patients had received PCI. However, this lack of difference may simply reflect the small number of patients. In contrast, an increased decline in both immediate and delayed recall on the Hopkins Verbal Learning Test was observed at one year when patients with stage III NSCLC underwent PCI (30 Gy in 15 fractions) in the Phase III prospective randomized study RTOG 0214 [125]. These findings may further clarify neurocognitive outcomes after PCI as the data mature. The inclusion of neuropsychometric testing has not been common practice in the past; its inclusion in current and future trials will enhance our understanding of the long-term neurocognitive effects of PCI and of WBRT in general.
Local Therapy for Synchronous, Solitary Brain Metastasis from NSCLC
A proportion of patients with NSCLC present with synchronous brain metastasis. Five-year overall survival rates of over 20% have been reported when both the brain metastasis and the primary site were treated aggressively (Table 9; abbreviations used there: S, surgical resection; SRS, stereotactic radiosurgery; Chemo, chemotherapy; WBRT, whole brain radiotherapy). After SRS, overall survival was significantly higher for patients given definitive (as opposed to nondefinitive) thoracic therapy in a study of 42 patients by Flannery et al. [130]. Furthermore, higher survival was associated with early stage disease in the chest and better KPS [129,130,132]. A survival benefit was also observed for patients with more than one synchronous brain metastasis when those patients had good functional performance status and received thoracic therapy [135][136][137]. All of these findings support the delivery of local therapy to the chest for patients with good functional performance status and limited numbers of brain metastases. However, the survival benefit from local therapy still needs to be validated in a prospective randomized controlled trial.
Future Investigations
Given the lack of any demonstrated survival benefit for adjuvant WBRT after surgical resection or SRS, and its potential neurotoxicity, the use of SRS to deliver a boost dose to the tumor bed after craniotomy for patients with limited numbers of brain metastases has been investigated in recent years. In a study of 72 patients (43% NSCLC) with 1-4 brain metastases and 76 cavities after surgical resection, a median dose of 18.0 Gy was delivered to the median 79% isodose line at the periphery of the tumor bed [138]. The actuarial local control rate in this study was 79% at two years, and the distant control rate was 47% at 12 months. Three patients underwent surgical resection of a region of necrosis. Use of less conformal plans translated into a local control rate of 100%; the authors therefore recommended a planning target volume margin of 2 mm around the resection cavity. In a similar study of 52 patients (46% NSCLC) with up to four lesions, a local failure rate of 7.7% was observed after a median follow up of 13 months [139]. The distant failure rate was 44% after a median of 16 months after resection, and the median survival was 15 months. Similar results have been reported in other single-institution studies [141,142]. These small studies suggest that local control after surgery followed by adjuvant SRS is equivalent to that after surgery followed by adjuvant WBRT, but this remains to be validated in a prospective randomized study. However, the risk of distant recurrence remains high with adjuvant SRS alone. This makes the approach inappropriate for patients with solitary brain metastasis, good functional performance status, and locally controlled primary disease, because aggressive treatment has the potential to improve survival in these patients, especially those with lung cancer [142,143]. Therefore, WBRT may still be warranted in patients with good prognosis, in addition to surgery and adjuvant SRS, if the potential toxicity is tolerable. The feasibility of this approach was investigated in a small study of 27 patients (70% NSCLC); the actuarial two-year local control was 94%, and the two-year actuarial incidence of new brain metastasis was 30% [144]. Only one patient required reoperation for symptomatic radiation necrosis, at 16 months after treatment. The median survival was 17.6 months. Whether this approach will lead to a survival benefit still requires further investigation in a randomized trial.
As mentioned previously, SRS can spare adjacent normal tissue by achieving a sharp dose gradient at the periphery of the tumor target volume. However, this advantage is diminished with large lesions: to spare normal brain tissue, the delivered dose needs to be decreased to avoid potential neurotoxicity [51,145], and tumor response is usually impaired as a result [146]. Therefore, fractionated stereotactic radiotherapy has been proposed, owing to the advantages of reoxygenation of hypoxic cells within large lesions and the significant increase in late-responding tissue sparing when the radiation dose is fractionated [147]. Thus, the therapeutic ratio can be significantly increased when large brain metastases are treated with a high dose delivered in several fractions with a stereotactic set-up. This concept has been validated in a small study in which patients with large brain metastases (average volume 21.2 cm3) were treated safely with fractionated stereotactic radiotherapy, achieving a local control rate of 83% [148]. These findings were confirmed in a larger retrospective study of patients with large brain metastases treated with this technique [149]. Similarly, many single-institution studies have shown excellent clinical outcomes and toxicity profiles for fractionated stereotactic radiotherapy with or without the use of a frame (Table 10). These studies suggest that local control is related to tumor size [155] and that intracranial control outside of the treated area tends to be poor when fractionated stereotactic radiotherapy is used alone. However, its combination with WBRT has been shown to be feasible, with good intracranial control and toxicity profile [156,157], and may be a better option for patients who present with neurological deficits from large, but few, brain metastases. The patient selection criteria for this technique, alone or as a boost, need further investigation in prospective studies. It is also important to be aware that a larger margin than that used for SRS may be needed when patients are not precisely immobilized. In patients with good prognostic factors, such as 1-3 brain metastases, RPA class 1-2, controlled extracranial disease, GPA of ≥2.5, young age, and high KPS scores, aggressive treatment of brain metastasis with WBRT followed by a conventional external beam boost to the gross tumor or the surgical bed has been investigated. This approach has consistently been shown to achieve local control comparable to that reported for WBRT + SRS, as well as a median survival time of over 12 months, significantly better than WBRT alone with or without surgical resection [158,159]. Radiation delivered as a simultaneous integrated boost with intensity-modulated radiotherapy has been shown to produce a sharper dose gradient than WBRT followed by SRS for the treatment of brain metastasis [160,161]. This leads to improved normal tissue sparing, owing to the ability to optimize the dose to the normal brain and to account for dose spillage from the boost to the adjacent brain tissue in the planning of WBRT. This approach not only spares more normal brain tissue but also shortens treatment time for patients with multiple lesions. Its feasibility has been demonstrated in a single-institution Phase I study using helical tomotherapy, which combines the delivery of intensity-modulated radiotherapy with megavoltage CT imaging for precise, image-guided radiation delivery [162].
In this study, 48 patients (50% of whom had lung cancer) were treated with WBRT 30 Gy in 10 fractions and a simultaneous integrated boost to the brain metastases that was safely escalated from 5 to 30 Gy in 10 fractions. No grade 3-5 dose-limiting toxicity was encountered. However, this study had a median follow up of only 7.72 months and a median overall survival time of only 5.29 months; given the small number of patients, no firm conclusions can be made regarding tumor response and survival outcome. Also worth mentioning is the application of helical tomotherapy with a simultaneous integrated boost in the treatment of recurrent brain metastases from lung and breast cancer. The safe delivery of 30 Gy to the gross disease and 15 Gy to the whole brain in 10 fractions for up to 11 lesions was reported by Sterzing et al. [163]. In this report, an excellent dose conformality index was achieved, no severe toxicity was observed, and the patients remained recurrence-free at six and 12 months of follow up. Based on this limited evidence, the simultaneous integrated boost may be an excellent treatment option, with intensity modulation of the doses to the target and the adjacent brain tissue. No standard has been established regarding the planning target volume margin for gross disease; depending on the degree of immobilization, margins from 0 to 10 mm have been reported [158,[161][162][163]. Further conclusions regarding this matter can be made as the existing data mature. Recently, excellent dose sparing of radiosensitive structures, such as the hippocampus, was reported when brain metastases were treated with the simultaneous integrated boost approach, which further supports its use for normal tissue sparing [164,165].
Other future work involves targeting the epidermal growth factor receptor (EGFR) family of four homologous receptors, EGFR (ERBB1), HER-2/neu (ERBB2), HER-3 (ERBB3), and HER-4 (ERBB4). EGFR activation leads to receptor tyrosine-kinase activation and to a series of downstream signaling activities that mediate tumor cell proliferation, migration, invasion, and the suppression of apoptosis [166]. As a result, inhibiting EGFR by binding its intracellular adenosine triphosphate-binding site with small-molecule tyrosine kinase inhibitors has been investigated as a treatment strategy for NSCLC [167]. Among these inhibitors, erlotinib has shown a survival benefit when combined with chemotherapy for advanced stage NSCLC [168]. However, tumor response is mainly limited to patients who possess somatic mutations in the kinase domain of the EGFR gene [169]; these patients are usually of East Asian descent, female, nonsmokers, and have adenocarcinoma [170]. In a small series of 41 patients with brain metastasis from lung adenocarcinoma, gefitinib showed antitumor activity (10% major response), and intracranial control was associated with previous WBRT [171]. In another study, by Kim et al. [172], median progression-free survival and overall survival times of 7.1 and 18.8 months were observed in East Asian, nonsmoking patients with lung adenocarcinoma and asymptomatic synchronous brain metastasis after treatment with either gefitinib 250 mg or erlotinib 150 mg once daily. Both studies suggest a potential role for EGFR inhibitors in the treatment of brain metastases. Additive effects may be produced when EGFR inhibitors and WBRT are delivered together, as patients who received both treatments had better disease control and longer overall survival [171,172]. Several studies have demonstrated increased response to EGFR inhibitors, as well as prolonged time to intracranial progression and improved overall survival, in patients with mutations in the EGFR gene [173,174]. The presence of EGFR mutations was also shown to enhance radiation response; for patients with EGFR mutation and brain metastases from lung adenocarcinoma, WBRT delivered concurrently with EGFR inhibitors produced a response rate of 84% [175]. However, severe toxicities, including grade 5 interstitial lung disease, have been reported in patients treated with concurrent erlotinib and WBRT [176,177]. This unexpected lung toxicity needs further investigation before these drugs can be administered safely in this setting.
Conclusions
In summary, patients with brain metastases from lung cancer have several treatment options, which are summarized in Figure 1. The choice of treatment will greatly influence the overall prognosis of patients with advanced stage lung cancer. Based on current evidence, combined modality treatment of brain metastases has greatly improved the survival of patients with single lesions, good functional performance status, and controlled extracranial disease, as demonstrated in prospective randomized studies. Neurocognitive deterioration remains a concern for patients with excellent functional performance status who receive WBRT ± SRS. However, shortly after treatment, radiotherapy may actually improve neurocognitive function in the select group of patients who present with neurological impairment from brain lesions at baseline. PCI for SCLC is currently part of the standard of care, but PCI for NSCLC is still investigational. Local therapy should be considered for patients with early stage intrathoracic disease and the brain as the sole site of metastasis. To further improve treatment outcomes for brain metastasis, options including an SRS boost to the surgical bed alone, fractionated stereotactic radiotherapy, and WBRT with a simultaneous integrated boost are currently under investigation. Among these options, fractionated stereotactic radiotherapy allows the delivery of a high dose in a few fractions, which is a more biologically sound approach for large lesions; it can also potentially decrease the toxicity of SRS because a lower dose is delivered per fraction over multiple fractions, greatly reducing the risk of late normal tissue damage. WBRT with a simultaneous integrated boost allows dose optimization such that a high dose is given to the target volume while the dose delivered to the whole brain is kept below a certain threshold. This increases the tumor dose while sparing as much normal brain tissue as possible to prevent neurological toxicity from radiotherapy, and such an approach holds great promise for the future. Radiosensitization is not currently indicated clinically. However, the use of EGFR inhibitors ± WBRT has demonstrated good intracranial disease response in patients with EGFR mutations, and this strategy also warrants further clinical investigation.
|
v3-fos-license
|
2020-01-02T21:11:27.013Z
|
2020-01-10T00:00:00.000
|
213749675
|
{
"extfieldsofstudy": [
"Materials Science"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://doi.org/10.1088/2053-1591/ab6374",
"pdf_hash": "5e5aafd2e44eddb50b61e4a24c284ba0e6d63bf9",
"pdf_src": "IOP",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46468",
"s2fieldsofstudy": [
"Materials Science"
],
"sha1": "07ea9979008530588290f8a47c56db9dc49bf5d8",
"year": 2019
}
|
pes2o/s2orc
|
CoMoCrSi coatings prepared by high-velocity oxygen fuel spraying: microstructure and mechanical properties at elevated temperatures up to 800 °C
The microstructures, mechanical properties, and tribological behaviors from room temperature (RT) to 800 °C of HVOF-sprayed CoMoCrSi coatings were investigated in detail. The as-sprayed CoMoCrSi coatings were found to be predominantly composed of intermetallic Laves phases, i.e., Co7Mo6, Co3Mo2Si, Cr3Si, and some amorphous phases. The as-sprayed coatings possessed a compact, typical lamellar microstructure and balanced mechanical properties; their Vickers hardness decreased from 855.9 ± 16 HV5.0 at RT to 583.9 ± 10 HV5.0 at 800 °C owing to the normal softening of the material in a hot environment. Further, between room temperature and 400 °C, the as-sprayed coatings suffered serious mechanical wear without any lubricating tribolayer forming on the worn surface, indicating that they would not function as good anti-wear materials at low temperatures. In particular, the coatings exhibited brittle fracture coupled with abrasive wear at RT, obvious abrasive wear at 200 °C, and severe adhesive wear at 400 °C, where the highest friction coefficient of 0.65 and the highest wear rate of 35.79 × 10−6 mm3/(N·m) were recorded. As the test temperature increased to 600 and 800 °C, the friction coefficient of the coating decreased to 0.45 and 0.26, respectively, and the corresponding wear rates reached 0.135 × 10−6 mm3/(N·m) and 0.288 × 10−6 mm3/(N·m), a difference of approximately two orders of magnitude between the low- and high-temperature wear rates. This result further confirms that the as-sprayed coatings are a better choice of abrasion-resistant material for high-temperature applications. After sliding tests at 800 °C, numerous metallic oxides, i.e., Co3O4 and MoO3, and bimetallic oxides such as CoMoO4 and Co2CrO4, of nanometer size (50–100 nm), were identified in the continuous protective layer formed on the worn surface. These oxides played an important lubricating role and reduced direct contact between the coating and its counterpart during sliding, leading to a decrease in the friction coefficient and material loss. The main wear mechanisms of the coatings in this temperature range are slight adhesive wear coupled with abrasive wear.
Introduction
As one of the members of the Tribaloy family of alloys, CoMoCrSi has excellent strength and hardness, as well as superior anti-wear and anti-corrosion resistance between room temperature (RT) and 800°C. It is therefore suitable for industrial applications in sectors such as aerospace, turbine, oil, pump, energy, and mining [1][2][3]. This is mainly because the alloy contains a large volume fraction of a hard, intermetallic Laves phase in a much softer Co-based alloy matrix [4]. In particular, the main alloying elements of the Co-based Tribaloy alloys are molybdenum, chromium, and silicon, among which silicon is a minor (∼3 wt%) constituent [5]. Moreover, the Laves intermetallic phase is composed of Co, Mo, and Si with an approximate composition of Co3Mo2Si or CoMoSi, in which Mo and Si improve the strength and wear properties of the Co-based matrix at temperatures as high as 1230°C, while Cr contributes to high corrosion resistance without degrading the anti-wear resistance [3,6].
Over the last few decades, various thermal spraying technologies, such as atmospheric plasma spraying (APS) [7], cold spraying [8][9][10], arc spraying [11], high-velocity oxygen fuel (HVOF) spraying [12,13], and high-velocity air spraying (HVAF) [14,15], have been developed and widely applied in the preparation of coatings. Further, several studies of as-sprayed CoMoCrSi coatings have attracted extensive attention from researchers worldwide. Wang et al [1] studied the cavitation erosion of APS-sprayed CoMoCrSi coatings before and after heat treatment at 800 and 1000°C; the results indicated that heat treatment can significantly reduce the mean erosion depth of the coatings and that the cavitation damage mainly involves the removal of splashes and delamination. Cai et al [2] prepared HVOF-sprayed CoMoCrSi coatings using different spraying parameters and reported that the coating properties showed good adaptability to the spraying parameters, and that the main wear mechanisms changed from abrasive wear and delamination at RT to adhesive wear at elevated temperatures up to 400°C. Lusvarghi et al [16,17] examined the dry sliding performance of heat-treated CoMoCrSi coatings only at room temperature; they reported that heat treatment at 600°C caused the appearance of submicrometric crystalline regions and improved the hardness as well as the elastic modulus of the coating, and that the friction coefficient and wear rate were markedly reduced. D'Oliveira aimed to understand the oxidation of CoCrMoSi coatings at 450 and 750°C and clarified the role of the oxide layer in the wear behavior of the coating at room temperature [18]. Further, Renz et al assessed the high-temperature sliding wear behavior of a Tribaloy® T400 block (chemical composition in wt%: 28.5 Mo, 8.5 Cr, 2.6 Si, and Co balanced), but carried out abrasive wear tests only at 40, 400, and 600°C [19]. However, actual industrial processes and applications involve many wear modes, such as ball-on-disc, pin-on-disc, and ring-on-block configurations, at normal or high temperature, and in reciprocating or rotating motion. According to the available reports, a systematic investigation of the mechanical properties, tribological performance, and wear mechanisms of as-sprayed CoMoCrSi coatings at various temperatures has rarely been performed to date. Therefore, it is essential to assess the mechanical properties and wear behaviors of as-sprayed CoMoCrSi coatings from room temperature to 800°C.
The aim of the present investigation was to fabricate CoMoCrSi coatings on stainless steel 316 L alloy by the HVOF technique and to systematically investigate their microstructure, mechanical properties, and tribological performance in the ball-on-disc mode at different temperatures (RT, 200, 400, 600, and 800°C). Moreover, the wear mechanisms of the as-sprayed CoMoCrSi coatings at different temperatures were investigated and analyzed comprehensively to provide guiding principles for the application of CoMoCrSi coatings in sliding wear environments over a wide temperature range.
Preparation of the coatings
A feedstock powder (Metco Diamalloy 3001) with a nominal chemical composition of 28.5 wt% Mo, 17.5 wt% Cr, 3.4 wt% Si, and 50.6 wt% Co, hereafter referred to as the CoMoCrSi alloy, was obtained from Sulzer Metco. A scanning electron microscopy (SEM) image showing the surface morphology of the CoMoCrSi powder is presented in figure 1(a), where it is apparent that the powder particles are spherical with sizes ranging from 5 to 45 μm. The cross-sectional SEM image of a CoMoCrSi powder particle (figure 1(b)) confirms that the particles have a compact internal texture and that the elements are uniformly distributed within the particle. Moreover, the energy-dispersive x-ray spectroscopic (EDS) results (figure 1(c)) basically agree with the nominal chemical composition of the CoMoCrSi powder. The CoMoCrSi coatings were fabricated using Diamond Jet 2700 HVOF spraying equipment (Sulzer Metco, USA) manipulated with an IRB 2400/16 robot (ABB, Switzerland). Specifically, natural gas was used as the fuel, oxygen as the combustion improver, and nitrogen as the carrier gas for the powder feed; the detailed spraying parameters are shown in table 1. Stainless steel 316 L alloy was used as the substrate. To improve adhesion between the coating and substrate, the substrates were first sandblasted and then ultrasonically cleaned in a mixture of alcohol and acetone for 20 min.
Characterization of the coatings
A scanning electron microscope (TESCAN MIRA3, Czech Republic) equipped with an energy-dispersive x-ray spectrometer was employed to characterize the microstructures and worn surface morphologies of the as-sprayed coatings. The porosity of the as-sprayed coating was measured using image analysis software (ImageJ-NIH, Bethesda, USA) in accordance with the ASTM E2109-01(2014) standard for determining the area-percentage porosity of thermally sprayed coatings. The porosity reported herein is the average of five measurements carried out on randomly acquired cross-sectional images of the as-sprayed coating at 2000× magnification (a minimal sketch of this area-fraction estimate is given after this paragraph). X-ray diffraction (XRD; Rigaku D/max-RB, Japan, λ=0.15 nm) was performed to determine the phase composition of the coatings before and after the wear tests; XRD was carried out in the angular range of 10-100° using Cu Kα radiation at a voltage of 40 kV and a scan speed of 10° min−1. The XRD patterns were analyzed using the Jade 6.5 software based on the standard ICSD pattern (51/54529) data files.
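The area-fraction porosity measurement described above can be illustrated with a short sketch. The code below is a minimal, hypothetical Python illustration of the same idea (the study itself used ImageJ): it assumes that pores appear darker than the metallic matrix in the cross-sectional SEM images, and the grey-level threshold and the synthetic test images are illustrative assumptions, not values or data from this work.

```python
import numpy as np

def porosity_percent(image: np.ndarray, threshold: int = 60) -> float:
    """Estimate area-percentage porosity of one grayscale SEM image.

    Pixels darker than `threshold` are counted as pores (assumption:
    pores image darker than the metallic matrix in SEM contrast).
    """
    pores = image < threshold
    return 100.0 * pores.sum() / pores.size

def mean_porosity(images: list[np.ndarray], threshold: int = 60) -> tuple[float, float]:
    """Average porosity over several randomly acquired cross-sections,
    mirroring the five-image averaging described in the text."""
    values = [porosity_percent(img, threshold) for img in images]
    return float(np.mean(values)), float(np.std(values))

# Example with synthetic data: five 8-bit images with ~0.5% dark (pore) pixels each
rng = np.random.default_rng(0)
fake_images = [
    np.where(rng.random((1024, 1024)) < 0.005, 30, 200).astype(np.uint8)
    for _ in range(5)
]
mean, std = mean_porosity(fake_images)
print(f"porosity = {mean:.2f} +/- {std:.2f} %")
```

In practice the threshold would be chosen from the image histogram (or by an automatic method such as Otsu's), which is the step the ASTM procedure and ImageJ handle for the user.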
An MH-5-VM microhardness tester was employed to measure the cross-sectional and surface microhardness of the as-sprayed coatings, with a load of 300 g and a dwell time of 10 s. The Vickers hardness of the as-sprayed coating was assessed using a high-temperature Vickers hardness tester (Archimedes, HTV-PHS30, UK) at room temperature (RT), 200, 400, 600, and 800°C. These tests were performed with a load of 5000 g and a dwell time of 5 s in a low-vacuum, oxygen-free environment. The specimens were held for 5 min at each test temperature before indentation. Ten indents were made on every specimen, and the average of the ten measured values is reported as the final result for both tests. The test parameters differ between the microhardness and Vickers hardness measurements because the indentation profiles obtained under small applied loads at high temperatures are blurred and irregular. Furthermore, the nano-mechanical properties of the oxide layer were assessed using an NHT02-05987 nanoindentation tester (CSM, Switzerland; test parameters: normal load = 10 mN, loading rate = 20 mN min−1, dwell time = 10 s).
The friction and wear behaviors of the as-sprayed coatings were evaluated using a ball-on-disc tribometer (CSM, Switzerland) at room temperature (RT), 200, 400, 600, and 800°C in ambient atmosphere with a relative humidity of 30±5%. The tests were performed in triplicate to reduce error, and the specimens were held at each test temperature for a fixed period of 20 min before sliding. Before the tests, all specimens (Φ 25 mm × 7 mm) were polished to reduce the surface roughness, then cleaned ultrasonically in acetone for 10 min and dried with nitrogen. All tests were conducted with a normal load of 5 N, a linear velocity of 10 cm s−1, a rotating radius of 5 mm, and a sliding distance of 200 m. Al2O3 ceramic balls (2400 HV, Ra 0.1 μm) with a diameter of 6 mm and a density of 3.92 g cm−3 were used as the counterparts. The volume loss of the coating after the wear test was determined using a Micro-XAM-3D non-contact surface profiler (ADE Corporation, Massachusetts, USA). The wear rate (KW, mm3/(N·m)) was calculated using the equation KW = VW/(P×L), as reported previously [20], where VW is the wear volume loss in mm3, P is the applied normal load in newtons (N), and L is the sliding distance in meters (m). The reported wear rates are the average values of five measurements. The phase composition of the worn surface after the tests was determined by analysis of Raman spectra (Horiba Raman microscope, 532 nm He-Ne laser) recorded in the range of 100 to 1800 cm−1.
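For clarity, the wear-rate calculation quoted above can be written out explicitly. The snippet below is a minimal sketch of KW = VW/(P×L) with a unit conversion from the profiler's μm3 volume output to mm3; the example input is a hypothetical volume chosen for illustration, not a measurement from this study.

```python
def wear_rate(volume_loss_um3: float, load_n: float, distance_m: float) -> float:
    """Wear rate K_W = V_W / (P * L) in mm^3/(N*m).

    volume_loss_um3 : wear volume from the 3D profiler, in um^3
    load_n          : applied normal load, in N
    distance_m      : total sliding distance, in m
    """
    volume_mm3 = volume_loss_um3 * 1e-9   # 1 mm^3 = 1e9 um^3
    return volume_mm3 / (load_n * distance_m)

# Illustrative (hypothetical) example: 5 N load, 200 m sliding distance
k_w = wear_rate(volume_loss_um3=30.0e6, load_n=5.0, distance_m=200.0)
print(f"K_W = {k_w:.2e} mm^3/(N*m)")   # ~3.0e-05 mm^3/(N*m) for these inputs
```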
Microstructures of the coatings
The XRD patterns of the CoMoCrSi alloy powder and as-sprayed coatings are shown in figure 2. As indicated, the as-received CoMoCrSi powder registers a crystalline structure and consists mainly of a high fraction of hard intermetallic Laves phases, such as Co3Mo2Si (PDF#30-0449), Co (PDF#05-0727), CrSi2 (PDF#35-0781), and the Co-Mo intermetallic Co7Mo6 (PDF#29-0489) [1,21]. However, the XRD pattern of the as-sprayed coating displays only two diffraction peaks, at 43.6° and 78.8°, which were identified as a solid solution of Co and Cr and could be assigned to Co7Mo6 (PDF#29-0489) and some intermetallic Laves phases such as Co3Mo2Si (PDF#15-0491) and CrSi2 (PDF#35-0781) [22,23]. As already revealed by other groups [20,24], the peaks of numerous phases disappear in the XRD pattern of the coating, presumably because these phases were entirely dissolved in the melt and their reprecipitation was hindered by impact quenching. The widened diffraction peaks at 43.6° and 78.8° indicate the presence of amorphous phases in the as-sprayed coatings. As a kind of precipitation-strengthening alloy, a thermally sprayed CoMoCrSi coating usually contains amorphous (metallic glass) structures, which is also consistent with the nature of other CoMoCrSi coatings deposited by the HVOF technique [2,16,17]. As reported elsewhere [11], the coating forms by the melting and rapid solidification of the material during the spraying process, and phase transformation and crystallization are undoubtedly hindered to some degree. The presence of amorphous phases in the coatings can be ascribed to the extremely high cooling rates experienced by the melted particles when they impact the relatively cold substrate at high velocity, which is conducive to the formation of amorphous phases. Furthermore, some intermetallic Laves phases existing in the as-sprayed coatings in the form of an amorphous microstructure (metallic glass state) could enhance the corrosion resistance, wear resistance, and high-temperature strength [21].
Figure 3 shows the top-surface and cross-sectional SEM images of the as-sprayed CoMoCrSi coating. It can be clearly seen that the as-sprayed coating exhibits a relatively rough surface. Pores, molten areas, and half-molten particles are found on the top surface, as indicated by the arrows in figure 3(b). It can be concluded that the rapid, high-velocity impact of the molten and half-molten particles on the substrate surface at the end of the spraying process accounts for the relatively rough surface. Moreover, pores might have formed owing to the lack of follow-up particles striking the as-deposited surface. However, the cross-sectional images display a compact structure for the as-sprayed coating, with a thickness of 350 μm, as shown in figure 3(c). It is obvious that the as-sprayed coating has a typical lamellar structure with very few tiny pores (indicated by arrows in figure 3(d)) randomly distributed throughout the coating; moreover, no significant cracks are observed within the coating or at the interface with the substrate. The absence of unmelted particles within the as-sprayed coating can also be confirmed, indicating that the as-received CoMoCrSi powders were fully melted and well deposited on the substrate.
The as-sprayed coating registers a low porosity of 0.53±0.1%, as determined by porosity analysis of cross-sectional SEM images at a magnification of 2000×; this is similar to the values reported for HVOF-sprayed Stellite-6 coatings (0.6%-4.9%) [24], plasma-sprayed CoMoCrSi coatings (0.42±0.1%) [1], and HVOF-sprayed CoMoCrSi coatings prepared under different spraying parameters [2]. Figure 4 displays the variation in microhardness from the substrate into the as-sprayed coating. It should be noted that the indentations were carried out on selected homogeneous areas to exclude the influence of microstructural defects. The microhardness of the as-sprayed coating is obviously much higher than that of the substrate. In particular, the substrate exhibits a microhardness of 241 HV0.3, while the coating registers an average cross-sectional microhardness of 890.6±19 HV0.3 and a surface microhardness of 902.8±32 HV0.3 (as shown in the inset in figure 4; the values are the averages of the microhardness values of ten different spots). Thus, there is approximately a four-fold difference between the microhardness of the as-sprayed coating and that of the substrate. Moreover, the hardness of the surface and the cross-section of the coating do not differ significantly, indicating that the as-sprayed coatings exhibit balanced mechanical properties. In addition, the microhardness of the as-sprayed coating is also higher than those of HVOF-sprayed CoCrW coatings (635±75 HV0.3) [25] and HVOF-sprayed CoMoCrSi coatings obtained with different spraying parameters (550-650 HV0.3) [2]. The uniform compact structure (figure 3), combined with the certain amount of amorphous phases present in the as-sprayed coating, can account for its higher hardness; in particular, no slip planes are available for easy dislocation movement under the indentation load because of the lack of an ordered crystalline structure, which hinders plastic deformation and leads to higher hardness [17]. As shown in figure 5, the Vickers hardness values of the as-sprayed coatings obviously decrease with increasing temperature. In particular, the Vickers hardness drops from 855.9±16 HV5.0 at RT to 656.2±22 HV5.0 at 400°C (a drop of 23.3%) and to 583.9±10 HV5.0 at 800°C, and the drop in hardness is quite small at temperatures above 400°C. These results coincide well with previous studies [19], in which the hot hardness of Tribaloy® T400 material (with a chemical composition similar to that of Diamalloy 3001) dropped from 724±22 HV at 40°C to 449±6 HV at 400°C and 294±4 HV at 600°C; in that work, the hot hardness values, corresponding to macrohardness, are given as estimated equivalent Vickers hardness. It can also be inferred that the hot hardness of the as-sprayed coatings is not influenced by thermal hardening or oxide formation, because the indentations were made in a short time and in a low-vacuum argon environment; the observed decrease is therefore the normal softening of the material in a hot environment. In addition, the small difference between the microhardness (902.8±32 HV0.3) and the Vickers hardness (855.9±16 HV5.0) at RT can be ascribed to the error associated with the measurement instruments and the applied normal loads, and can therefore be ignored. Figure 6 shows the steady-state friction coefficient and wear rate of the CoMoCrSi coating at different test temperatures.
The values were recorded over the entire duration of the test and averaged from three individual tests. As indicated, the coating exhibits the highest friction coefficient of 0.65 at 400°C and the lowest friction coefficient of 0.26 at 800°C. In particular, the friction coefficient of the as-sprayed coating decreases slightly from 0.57 at RT to 0.52 at 200°C, then increases to 0.65 at 400°C, showing the same variation trend as the T-400C alloy in the temperature range of RT to 450°C [4]; the T-400C alloy (14 wt% Cr, 26 wt% Mo, 2.6 wt% Si, Co balance) was developed from the conventional T-400 alloy with increased Cr content and still belongs to the Co-Mo-Cr-Si Tribaloy family. The friction coefficient then decreases to 0.45 at 600°C and 0.26 at 800°C, which is superior to that of the HVOF-sprayed Stellite-6 coating, whose friction coefficient is 0.43 at 800°C [24]. Compared with the steady-state friction coefficients, the real-time friction coefficient curves as a function of sliding distance show random variation and are more intuitive for describing the friction process at different test temperatures. Therefore, the real-time friction coefficient curves recorded at different test temperatures are used to analyze the sliding process (figure 7). There are indeed obvious variations in the friction characteristics at different test temperatures. As indicated, the friction coefficient curves in the temperature range of RT-400°C fluctuate more markedly than those at 600°C and 800°C. At high temperatures (600 and 800°C), the coatings show similarly flat curves after a short running-in period and maintain a relatively stable state over a distance of more than 180 m. It can be concluded that the lower the friction coefficient, the smoother and steadier the friction state. This directly indicates that the as-sprayed coatings register satisfactory friction properties at high temperatures. Consequently, it can also be inferred that some additional, effective high-temperature lubricants were generated in the wear track during the high-temperature sliding process. The related features are discussed in detail in the following section.
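For readers who wish to reproduce this kind of analysis, the sketch below shows one way a steady-state friction coefficient could be extracted from real-time traces: discard an assumed running-in portion, average the remainder of each trace, and then average over the repeat tests. The traces and the running-in cut-off are hypothetical and are not the exact procedure used here.

```python
import numpy as np

def steady_state_cof(trace, running_in_fraction=0.1):
    """Mean friction coefficient after discarding an assumed running-in portion."""
    start = int(len(trace) * running_in_fraction)
    return float(np.mean(trace[start:]))

# Hypothetical real-time traces from three repeat tests at one temperature.
rng = np.random.default_rng(0)
traces = [0.57 + 0.02 * rng.standard_normal(2000) for _ in range(3)]

values = [steady_state_cof(t) for t in traces]
print(f"steady-state COF = {np.mean(values):.2f} +/- {np.std(values, ddof=1):.2f}")
```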
Tribological behaviors of the coatings 3.3.1. Tribological properties of the coatings
In addition to the friction coefficient, wear rate is another important indicator of the tribological performance of a material. As shown in figure 6, the wear rate of the as-sprayed coating increases from 12.51×10 −6 mm 3 N·m at RT to 24.77×10 −6 mm 3 /(N·m) at 200°C and 35.79×10 −6 mm 3 /(N·m) at 400°C. However, the wear of the coating is greatly alleviated as the temperature is increased to 600 and 800°C, and then wear rate is reduced to 0.135-0.288×10 −6 mm 3 /(N·m) at these temperatures. A difference of about two orders of magnitude is observed between the values at 400 and 600°C. Overall, the largest wear rate and the highest friction coefficient at 400°C suggests that the as-sprayed coatings display poor tribological performance at this temperature, in agreement with a previous report [19]. The wear track depth profiles of the coatings are shown in figure 8, which demonstrate a similar tendency of variation as the wear rate. The maximum width and depth of the wear tracks increase with an increase in the temperature from RT to 400°C; however, the depth profiles change from sunken to convex ones at higher temperatures (as pointed by arrows in figure 8), suggesting different wear mechanisms at different test temperatures. Figure 9 exhibits the three-dimensional profile of the wear tracks corresponding to the test temperatures, which can intuitively reflect the wear conditions of the assprayed coating at different temperatures. As the test temperature is increased from RT to 400°C, the wear tracks are found to be deeper and wider, and the average wear volume of the coating changes from 29.7×10 6 μm 3 at RT to 82.7×10 6 μm 3 at 400°C. When the test temperature is increased further to 600 and 800°C, the wear tracks are found to be shallow and distinctly narrow. The average wear volume decreased to 1.4-6.0×10 6 μm 3 in this temperature range, and the wear of the as-sprayed coating was greatly alleviated, implying that the as-sprayed coating exhibits better sliding wear resistance at higher temperatures compared to that in the lower temperature range. Moreover, the wear tracks at higher temperatures display obvious accumulation of the wear debris, as evidenced by numerous protrusions that emerge on the edge of the wear track, as pointed by arrows in figures 9(e), (f).
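The wear rates quoted above are specific wear rates, i.e. the measured wear volume normalised by the normal load and sliding distance, k = V/(F·d). The sketch below illustrates this conversion using the 400 °C wear volume from figure 9; the load and sliding distance are placeholders, since the test parameters are not restated in this excerpt.

```python
def specific_wear_rate(volume_um3, load_N, distance_m):
    """Specific wear rate k = V/(F*d) in mm^3/(N*m)."""
    volume_mm3 = volume_um3 * 1e-9  # 1 um^3 = 1e-9 mm^3
    return volume_mm3 / (load_N * distance_m)

# Hypothetical load and distance; the average wear volume at 400 degC is ~82.7e6 um^3.
k = specific_wear_rate(volume_um3=82.7e6, load_N=10.0, distance_m=200.0)
print(f"wear rate ~ {k:.2e} mm^3/(N*m)")  # same order of magnitude as figure 6
```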
Wear mechanism of the coatings
In order to explain the wear mechanism of the as-sprayed coatings in a wide temperature range, the coatings after friction tests at different temperatures were characterized by means of SEM, EDS, XRD, and Raman analysis. As indicated in figure 10, the coatings tested at different temperatures exhibit various worn surface morphologies, implying the operation of various wear mechanisms. Moreover, the widths of the wear tracks at low temperatures are higher than those at high temperatures, which can well verify the results in figure 8. For the coating tested at RT, the wear track registers a very rough surface, and numerous spallations (as pointed by arrows in figure 10(a)) can be observed on the worn surface, indicating that many parts of the coating material were pulled off from the contact area during the sliding process. In the regional magnified image ( figure 10(b)), the worn surface displays a brittle fracture, and many nanometer-sized fine particles are distributed in this area (as shown in the inset in figure 10(b)). As reported in previous studies [16], many hard Laves phases with brittle feature exist in the CoMoCrSi alloy, which suffer brittle fracture under alternating applied load during the sliding process, thereby causing fatigue failure. This is more so for the as-sprayed coating with higher hardness owing to the existence of amorphous phases. In fact, the process of fatigue crack propagation involves the augmentation of many crack propagations under constant dynamic loading. In particular, numerous tiny cracks and pores existing within the coating will grow and join with each other under alternating applied load, and then grow toward the contact surface, resulting in final spallation failure [26]. Moreover, the fragments will be subjected to the repeating crush during the sliding process, which results in many fine nanometer-sized particles (as shown in the inset in figure 10(b)). According to the EDS results, these fine particles are enriched in chromium, molybdenum, cobalt and oxygen (as shown in spectrum 1 in figure 11); on the contrary, the smooth area lacks oxygen (as shown in spectrum 2 in figure 11). In general, small particles are oxidized more easily, which may be responsible for this phenomenon. These results suggest that the coating tested at RT exhibits a brittle fracture coupled with abrasive wear mechanism, that is, mechanical wear. At 200°C, the wear track presents a smooth and flat surface (figure 10(c)), as well as numerous obvious grooves (as pointed by arrows in figure 10(d)), suggesting an abrasive wear mechanism. Numerous tiny spallations can also be seen in the wear track (as pointed by arrows in figure 10(d)); the ploughing of the detached hard phases in the wear track during the sliding process could account for the formation of grooves. These spallation areas mainly consist of cobalt, chromium, molybdenum, and oxides on the basis of EDS results (as shown in spectrum 4 in figure 11). Similar to the wear track at RT, the smooth and flat area is enriched in cobalt, chromium, and molybdenum, and lack of oxygen (as shown in spectrum 3 in figure 11).
When the temperature is increased to 400°C, the worn surface exhibits a relatively flat morphology with some dark spots (figure 10(e)). When a dark area (figure 10(f)) is magnified, numerous fish-scale patterns (as pointed out by arrows in figure 10(f)) appear on the worn surface, which is a sign of adhesive wear. As shown in figure 5, the Vickers hardness dropped by 23.3%, from 855.9±16 HV5.0 at RT to 656.2±22 HV5.0 at 400°C, implying softening of the coating. It is therefore inferred that the coating was severely abraded during the sliding process, which can be responsible for the increased friction coefficient and wear rate (figure 6). Similar to the aforementioned EDS results for the coating tested at RT and 200°C, the metallic elements are abundant in the smooth gray areas (as shown in spectrum 5 in figure 11), and the rough dark areas mainly consist of oxides (as shown in spectrum 6 in figure 11). Further, the high oxidation stability of the Laves phase can account for the much smaller oxidized area on the worn surface (as pointed out by arrows in figure 10(e)). It is also worth noting that the fish-scale-patterned area is enriched in chromium and oxygen, while the other, smooth portion of the analyzed area is enriched in cobalt, molybdenum, and silicon, according to the elemental maps acquired over part of figure 10(f) (figure 12), although no obvious differences are noticed between the results of spectrum 5 and spectrum 6. That is, the portions rich in cobalt, molybdenum, and silicon exhibit superior anti-wear resistance during the sliding process, which is consistent with reports that the hard Laves phase rich in molybdenum and silicon can improve the strength and wear properties of the Co-based matrix [21,25].
In addition, the coatings after friction tests were characterized by XRD to identify the newly formed phases. Comparison of the XRD patterns of the coatings tested at room temperature, 200, and 400°C ( figure 13(a)) with that of as-sprayed coating (figure 2) indicates that no new phase is generated after the friction tests at these temperatures, and the coatings still consist mainly of a solid solution based on Co and Cr, and some amorphous phases. However, the Raman analyses conducted on the different areas in the wear track indicate the generation of some oxides. Particularly, no characteristic peaks can be observed in the spectra corresponding to the light areas in the optical micrographs (spot 1, spot 2, and spot 3 on the worn surface of the coatings tested at RT, 200, and 400°C respectively, as shown in figure 14, implying that no oxidation occurred in these areas and they mainly consist of intermetallic compounds. On the contrary, some new phases can be identified as Co 3 O 4 (Raman peaks located at 194, 488, 522, 618, and 691 cm −1 ) [27,28] and MoO 3 (typical peaks signal at 115, 144, 189, 238, 337, 658, 818, and 995 cm −1 ) [29][30][31] by the Raman analyses in the black areas (as shown in figure 14 (RT, 200°C, and 400°C)), and the results coincides well with the EDS results ( figure 11). It should be noted that the area of the wear track characterized by the micro-Raman analysis is much smaller than that characterized by the XRD analysis, and the quantity of the new oxides is very low; thus, these oxides can only be detected by Raman analysis. As reported by Alexander Renz [19], adherent and stable oxides formed on the Tribaloy ® T400 (with a similar chemical composition to that of Diamalloy 3001) above 540°C. We speculate that these thermodynamically stable Co and Mo oxides, i.e., Co 3 O 4 and MoO 3 may be generated at the flash temperature during the sliding process. Despite having a layer-type structure and good lubrication characteristic [32], these non-uniform and low adherent Co 3 O 4 and MoO 3 oxides can easily be ground off when the coating makes contact with the counterpart, and they can form fragments that act as third bodies and aggravate the abrasion. As a whole, the amount of the new phases generated after the friction tests at RT, 200 and 400°C is too little, and moreover, no complete and continuous tribolayer formed on the worn surface, which can account for the high friction coefficient and wear rate at these temperatures. Moreover, the Vickers hardness of as-sprayed CoMoCrSi coating decreased significantly from RT to 400°C. It can also be inferred that the as-sprayed coatings would be more and more easily grinded off during the sliding process with the temperatures increasing to 400°C, resulting in increased wear rate. Consequently, the wear mechanisms of the as-sprayed CoMoCrSi coatings change from a brittle fracture coupled with abrasive wear at room temperature to abrasive wear at 200°C, to severe abrasion along with adhesive wear due to the softening of the coating at 400°C, indicating a more severe mechanical wear of the coating. It can be concluded that the as-sprayed coatings do not have suitable anti-wear properties at low temperatures.
At 600°C, the worn surface of the coating is relatively smooth and flat, and a layer of a continuous film of various oxides can be apparently seen in the wear track (figures 10(h), (i)). In addition, only a few tiny cracks and slightly abraded patterns (as pointed by arrows in figure 10(i)) can be observed on the worn surface, implying that the wear of the coating is greatly alleviated and the coating exhibits a slightly adhesive wear mechanism. As the friction tests are performed at elevated temperatures under ambient atmosphere, the coatings are subjected to severe oxidation and a number of new peaks appear in the XRD pattern of the coating tested at 600°C ( figure 13(b)). However, the widened diffraction peaks at 43.6°and 78.8°indicate that the coating still contained mainly of CoCr solid solution and some amorphous phases. Furthermore, the intensity of the new peaks is low, suggesting the quantity of these phases is small. In particular, these new peaks could be identified as the Co 3 O 4 (PDF#42-1467), CoMoO 4 (PDF#21-0868), Co 2 SiO 4 (PDF#29-0508), and Co 2 CrO 4 (PDF#24-0326). In addition, the Raman analysis of the coating tested at 600°C effectively confirms the above conclusion. According to the Raman results of different spots on the worn surface ( figure 14 (600°C)), numerous new peaks can be attributed to CoMoO 4 (367, 816, 876, and 937 cm −1 ) [33,34], Co 2 SiO 4 (1352 and 1594 cm −1 ), [35,36] and Co 2 CrO 4 (550 and 680 cm −1 ), apart from the peaks of Co 3 O 4 and MoO 3 mentioned above. As reported in previous study [18], the metallurgical stability of CoMoCrSi coatings at high temperatures is related to the oxidation stability of the Laves phase, and their oxidation products could play an important role in determining their wear characteristics. Moreover, primary elements were uniformly distributed in the oxide layer, according to the result presented in figure 15 (acquired from the portion of figure 10(i)). In addition, the characteristic peaks in the Raman spectra acquired from different spots on the worn surface further well verify the intact stability of the oxide layer. Furthermore, the nano-mechanical properties of the oxide layer are presented in figure 16. The ratio of hardness and elastic modulus (H/E) is consistent with the wear conditions, that a large H/ E ratio value corresponds to high anti-wear resistance [37]. The worn surface at 600°C exhibits higher H/E ratio compared to that of the as-sprayed coating and the worn surface at 800°C. The oxides form a distinct oxide layer (figure 10(i)) with high anti-wear resistance and adhesion in the contact area, is not easily broken into fragments during the sliding process. They act as a protective barrier for CoMoCrSi coatings and reduce direct contact between the coating and its counterpart during the sliding process, resulting in lower friction coefficient and material loss.
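Because the H/E ratio is used here as an anti-wear indicator, a trivial helper for computing it from nanoindentation data is sketched below; the inputs are the hardness and elastic modulus reported for the 800 °C worn surface later in the text (figure 16), and the comparison value of ~0.064 for the 600 °C surface is taken from the sentence above.

```python
def h_over_e(hardness_gpa, modulus_gpa):
    """Hardness-to-elastic-modulus ratio, a common anti-wear indicator."""
    return hardness_gpa / modulus_gpa

# Worn surface tested at 800 degC: H ~ 14.4 GPa, E ~ 237.9 GPa (figure 16).
ratio_800 = h_over_e(14.4, 237.9)
print(f"H/E at 800 degC ~ {ratio_800:.3f}  (vs ~0.064 at 600 degC)")
```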
The SEM image of the wear track of the coating tested at 800°C (figure 10(h)) shows a much narrower (width of ∼120 μm) and smoother wear track, indicating a slight abrasive wear mechanism. Surprisingly, as shown in figure 10(k), numerous polygonal particles, 3-5 μm in size, emerge on the worn surface, demonstrating crystallization of the material. Moreover, these polygonal particles are enriched in molybdenum, cobalt, and oxygen (as shown in spectrum 7 in figure 11) and are considered to be CoMoO4 (PDF#21-0868). The gray areas are enriched in chromium, cobalt, and oxygen (as shown in spectrum 8 in figure 11) and are identified as Co2CrO4 (PDF#24-0326). The XRD analysis shows that numerous new diffraction peaks appear in the pattern of the coating (especially in the range of 10°-45°) after the friction test at 800°C (figure 13(c)). Moreover, some MoO3 (PDF#21-0569) could be detected apart from the oxide phases identified in the coating tested at 600°C, indicating that the coating began to crystallize at this temperature, which is consistent with the observation that a marked exothermal peak appears at 812°C due to oxidation of the alloy [17]. In addition, these phases can also be detected by Raman analysis, and the characteristic peaks acquired from different spots on the surface show satisfactory repeatability, as shown in figure 14 (800°C). In particular, the peaks can be attributed to the CoMoO4, Co2CrO4, and Co2SiO4 oxides identified above. Consequently, the polygonal particles and the gray areas on the worn surface are considered to be CoMoO4 and Co2CrO4, respectively. It is worth noting that these bimetallic oxides, i.e., CoMoO4, Co2CrO4, and Co2SiO4, belong to the olivine mineral family; they have an orthorhombic structure and consist of a hexagonally close-packed oxygen array in which half of the octahedral sites are occupied by cobalt atoms [39]. In addition, the intensity of the peaks corresponding to CoMoO4, Co2CrO4, and Co2SiO4 is stronger than that of the sample tested at 600°C, suggesting a larger quantity of these bimetallic oxides. The Vickers hardness of the coating decreases to 583.9±10 HV5.0 at 800°C, suggesting that the coating is more easily worn down during the friction process. It can therefore be concluded that these bimetallic oxides play a key role as lubricants at this temperature, leading to a lower friction coefficient than that of the sample tested at 600°C.
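The kind of peak assignment carried out above can be automated by matching measured Raman shifts against reference positions within a tolerance. The sketch below does this for the oxide references quoted in the text; the measured peak list and the ±10 cm−1 tolerance are assumptions for illustration only.

```python
# Reference Raman shifts (cm^-1) quoted in the text for the oxides of interest.
REFERENCES = {
    "Co3O4": [194, 488, 522, 618, 691],
    "MoO3": [115, 144, 189, 238, 337, 658, 818, 995],
    "CoMoO4": [367, 816, 876, 937],
    "Co2SiO4": [1352, 1594],
    "Co2CrO4": [550, 680],
}

def assign_peaks(measured, tolerance=10.0):
    """Match each measured Raman shift to any reference phase within the tolerance."""
    matches = {}
    for peak in measured:
        hits = [phase for phase, refs in REFERENCES.items()
                if any(abs(peak - r) <= tolerance for r in refs)]
        matches[peak] = hits or ["unassigned"]
    return matches

# Hypothetical peak positions read off a spectrum of the 800 degC wear track.
for peak, phases in assign_peaks([370, 812, 878, 935, 552, 684]).items():
    print(f"{peak} cm^-1 -> {', '.join(phases)}")
```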
To acquire more proof accounting for the superior friction and wear behavior of the coating tested at 800°C, the cross-section of the wear track formed at 800°C was characterized (figure 17). A continuous glaze layer with a thickness of 50-100 nm can be clearly seen on the worn surface; according to the EDS result (figure 17), it is composed of the primary coating elements together with oxygen and aluminum, demonstrating that material from the Al2O3 counterpart was transferred to the worn surface. It can be inferred that the glaze layer primarily consists of an amorphous phase with excellent mechanical properties, in line with our previous report on a NiCoCrAlYTa coating [40], in which the protective layer on the worn surface was found to consist mainly of a significant amount of amorphous phase and a small amount of crystalline phase. Another interesting observation is that the coating grains beneath this layer are refined to nanometer size (figure 17), which can be attributed to the combined actions of the applied load and frictional heat. In addition, the worn surface of the coating tested at 800°C registers a hardness of ∼14.4 GPa and an elastic modulus of ∼237.9 GPa (figure 16), which can be attributed to the glaze layer and to fine-grain strengthening that effectively resists external indentation. The H/E ratio (0.06) of the worn surface at 800°C is lower than that of the worn surface at 600°C (0.064), which explains why the former possesses a slightly higher wear rate than the latter (figure 6). The development of a
|
v3-fos-license
|
2020-03-19T10:21:15.488Z
|
2020-03-31T00:00:00.000
|
216296737
|
{
"extfieldsofstudy": [
"Materials Science"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://doi.org/10.1088/2053-1591/ab80aa",
"pdf_hash": "71b55ba4f413f8c748a79a94988d0fe31e0ae559",
"pdf_src": "IOP",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46469",
"s2fieldsofstudy": [
"Materials Science"
],
"sha1": "3c677248c3d0b7cb91b983d8806356139209f4a9",
"year": 2020
}
|
pes2o/s2orc
|
Effect of Ce addition on microstructure, mechanical properties and corrosion behavior of Al-Cu-Mn-Mg-Fe alloy
The effects of rare earth Ce on the microstructure, mechanical properties and corrosion behavior of Al-Cu-Mn-Mg-Fe alloys were investigated by means of microstructure analysis, tensile testing and electrochemical corrosion testing. The research shows that the Al-Cu-Mn-Mg-Fe alloy after low-temperature heat treatment mainly contains the S (Al2CuMg) phase, the T (Al20Cu2Mn3) phase, the Al6(Mn, Fe) phase and the Al7Cu2Fe phase, and that the rare earth Ce leads the alloy to form the new rare-earth phase Al8Cu4Ce. The appearance of this phase has a significant refining effect on the Al6(Mn, Fe) phase. Compared with the Ce-free alloy, the yield strength and tensile strength of the Al-Cu-Mn-Mg-Fe alloy with 0.254 wt% Ce increased by 7% and 15%, respectively, and the elongation increased from 3.1% to 4.8%. The Ce-containing alloy also has better corrosion resistance, represented by a decrease in corrosion current density and a positive shift of the corrosion potential in Tafel measurements in solutions of different concentrations, and by an increase in corrosion impedance in electrochemical impedance spectroscopy tests; in particular, the corrosion current density was reduced by 6.06 μA cm−2 in 3.5% NaCl solution.
Introduction
As an important part of electric vehicles, batteries are the root of electric vehicle power. Owing to their high voltage, no memory effect, no environmental pollution, high specific energy, flexible design and long life, lithium-ion batteries have become the first choice for electric vehicle power batteries as their safety performance has improved and their cost has fallen [1][2][3][4][5]. As a result, battery case materials have attracted increasing attention as an important part of square aluminum-shell lithium-ion batteries. However, traditional aluminum alloy case materials cannot meet the future demand for thinner battery cases because of low strength or poor thermal stability [6,7]. To address this problem, this paper attempts to improve the mechanical properties and corrosion resistance by adding rare earth Ce.
Rare earth elements have important practical significance for the improvement and optimization of the microstructure and properties of alloys. In recent years, the various properties of aluminum alloys have been improved by adding rare earth elements such as Yb [8], Ce, La [9], Sm, Y, Nd, Gd and Er [10][11][12][13], and the microalloying of aluminum alloys has been extensively studied. Subbaiah et al [14] first added the rare earth element Sc to aluminum alloys in 1971 to improve their strength. Xiao et al [15] added 0.25 wt% Ce to an Al-Cu-Mg-Ag alloy; the yield strength of the alloy increased by 8.5% at room temperature and by 85% at 350°C. Chen et al [16] studied the effects of La and B on the grain refinement and mechanical properties of cast Al-Si alloys. It was found that the addition of La and B formed LaB6 particles, which could significantly refine the α-Al grains, so that the strength properties, and particularly the elongation, of the cast Al-Si alloy were greatly improved. Du et al [17,18] studied the effect of rare earth Ce on the as-cast microstructure and properties of aluminum alloy. The Al-Cu-Mn-Mg-Fe alloy with a small amount of Ce formed dense S′ phases after room-temperature aging. Moreover, during the deformation process, the aging-precipitated S′ phases help to improve the mechanical properties of the alloy by
hindering the movement of dislocations or defects, especially at high temperatures. Wu et al [13] studied the effect of the addition of rare earth Sm on the microstructure and corrosion properties of AZ292 magnesium alloy. It was found that in the process of solution-ageing, rare earth Sm could promote the precipitation of β-Mg 17 Al 12 phase in the grains, inhibiting the precipitation of the phase on the grain boundaries, so that the β phase became finer and the distribution was more uniform, which significantly reduced the macroscopic corrosion current between the phase and the substrate, so that the addition of an appropriate amount of rare earth Sm could improve the corrosion resistance of the alloy. Chen [19] studied the effect of Ce on the microstructure and properties of Cu-Zn-Mn-alloys, and found that the addition of Ce significantly improved the electrochemical corrosion properties of the alloy. Based on the above-mentioned beneficial effects of rare earth elements on the alloy, we conclude that Ce may have the same improvement effect on the Al-Cu-Mn-Mg-Fe alloy.
In this paper, the appropriate amount of rare earth element Ce was added to Al-Cu-Mn-Mg-Fe alloy, which is used as the material of the lithium ion battery cases, to study the effects of Ce on the microstructure, mechanical properties and electrochemical corrosion properties of the alloy.
Materials and methods
The test raw materials were pure Mg and Al-Cu, Al-Mn, Al-Fe and Al-Ce master alloys, which were smelted in a vacuum induction furnace to obtain an alloy ingot. The ingots were homogenized in a box-type resistance furnace and then air-cooled to room temperature. After the homogenization treatment, hot rolling was performed on a two-roll mill to obtain hot-rolled sheet. The hot-rolled sheet was then subjected to a stress-relief annealing treatment in a box-type electric resistance furnace. Cold rolling was performed on a two-roll mill to obtain cold-rolled sheet. After different holding times, the cold-rolled alloy sheet was subjected to a low-temperature annealing treatment at 150°C for 24 h. The Ce content of the sample is 0.254 wt%, and the actual composition of the alloy is shown in table 1.
The grain structure of the cross section of the rolled sheet along the vertical rolling direction was observed by a metallographic microscope (OM) under polarized light. The phase of the alloy was analyzed using an x-ray diffractometer (XRD, Rigaku D/max 2500, CuKα, λ=1.54056 Å). The transmission electron microscope and its accompanying energy spectrometer (Tecnai G2 F30 TEM) were used to analyze the microstructure and composition of the sample by means of images generated by transmission electron beam or diffraction electron beam. Scanning electron microscopy and its accompanying energy spectrometer (Quanta FEG 450 SEM) were used to observe the microstructure and composition of the sample, as well as the fracture morphology of tensile sample and the surface morphology of corroded sample.
The tensile test of the alloy was carried out in accordance with standard ASTM E8/E8M-13a. The fracture morphology of the tensile sample was observed using a secondary electron imaging mode of a scanning electron microscope. The aluminum alloy etching test was carried out according to the national standard (JB/T 7901-1999). The corrosion surface topography of the sample was observed using a backscatter mode of a scanning electron microscope. The Tafel measurement has a scan rate of 1 mV s −1 and a scan range of −1 V to 0 V. The test was performed at room temperature (25°C) with a mass fraction of 3.5% and 0.5% NaCl solution. The electrochemical impedance spectroscopy (EIS) test was performed using the same test apparatus as the Tafel measurement. The test temperature was room temperature (25°C). Before the EIS test, the open-circuit potential-time curve of the sample was tested to determine its open circuit potential (Ecorr). The initial potential of the test was Ecorr with an amplitude of 5 mV and a frequency of 100 KHz to 10 mHz.
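The corrosion potential and corrosion current density reported in the next section are extracted from such polarization curves by Tafel extrapolation. A minimal sketch of that fitting step is given below; it uses synthetic data and fits the two semi-logarithmic Tafel branches over a fixed overpotential window, which is a simplification of what dedicated electrochemistry software does.

```python
import numpy as np

def tafel_fit(E, i, ecorr_guess, window=(0.05, 0.15)):
    """Estimate Ecorr and icorr by intersecting linear fits of log10|i| vs E
    on the anodic and cathodic branches.

    E: potential (V); i: current density (A/cm^2); window: overpotential range
    (V) away from Ecorr used for the fits (an assumed choice).
    """
    logi = np.log10(np.abs(i) + 1e-12)
    an = (E > ecorr_guess + window[0]) & (E < ecorr_guess + window[1])
    ca = (E < ecorr_guess - window[0]) & (E > ecorr_guess - window[1])
    slope_a, int_a = np.polyfit(E[an], logi[an], 1)
    slope_c, int_c = np.polyfit(E[ca], logi[ca], 1)
    ecorr = (int_c - int_a) / (slope_a - slope_c)   # intersection of the branches
    icorr = 10 ** (slope_a * ecorr + int_a)
    return ecorr, icorr

# Synthetic polarization curve for illustration (Tafel slopes of 120 and 150 mV/dec).
E = np.linspace(-1.0, 0.0, 500)
i = 8e-6 * (10 ** ((E + 0.68) / 0.12) - 10 ** (-(E + 0.68) / 0.15))
ecorr, icorr = tafel_fit(E, i, ecorr_guess=-0.68)
print(f"Ecorr ~ {ecorr:.3f} V, icorr ~ {icorr * 1e6:.1f} uA/cm^2")
```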
Results and discussion
3.1. Microstructure Figure 1 is a graph showing the equilibrium phase content of Al-Cu-Mn-Mg-Fe alloy with nominal composition as a function of temperature by thermodynamic simulation using phase diagram calculation software JMatPro. The equilibrium phase diagrams of the elements Al, Cu, Mn, Mg and Fe were calculated. It can be seen that the Al 6 Mn phases have the highest content among several intermetallic compounds in the stable state of the alloy. A certain amount of Fe has been added to the alloy, and it is foreseeable that the Al 6 Mn particles in the actual alloy should exist in the form of Al 6 (Mn, Fe) phases [20,21]. When the temperature is lower than 395°C, the S (Al 2 CuMg) phase appears, and the content of the S phase and the Al 6 Mn phase increases as the temperature decreases, and then tends to be stable. When the temperature is below 380 degrees C, the content of the Al 7 Cu 2 Fe phase and T (Al 20 Cu 2 Mn 3 ) phase decreases, the T (Al 20 Cu 2 Mn 3 ) phase disappears at less than 280 degrees C, and the Al 7 Cu 2 Fe phase disappears at less than 250 degrees C. In the equilibrium state, the calculated equilibrium microstructure of Al-Cu-Mn-Mg-Fe alloy at 300°C mainly consists of alpha-al phase, Al 6 Mn phase, S phase, T phase and Al 7 Cu 2 Fe phase. In the actual production process, the cooling speed is faster, which may be different from this. Figure 2 shows topographical images of the polarized metallographic phase of the Al-Cu-Mn-Mg-Fe alloy in the vertical rolling direction after aging at room temperature before and after Ce addition. Due to the effect of rolling stress, the grains were stretched along the rolling direction, showing a finer fibrous tissue. According to figures 2(a) and (b), it can be found that when the magnification is the same, compared with the alloy without Ce, the grain size of the 0.254ωt% Ce alloy is much finer, indicating that rare earth Ce has a significant effect on the grain refinement. Due to the addition of Ce, a new rare earth-rich phase Al 8 Cu 4 Ce was formed. Al 8 Cu 4 Ce belongs to the tetragonal crystal system, and its lattice parameters are a=0.8824 nm and c=0.5158 nm. As an intermetallic compound, Al 8 Cu 4 Ce has a relatively high melting point and will have a certain effect on the mechanical properties of Al-Cu-Mn-Mg-Fe [22].
The phase composition of the Al-Cu-Mn-Mg-Fe alloy before and after the addition of Ce was analyzed by XRD, and the results were shown in figure 3. The main constituents of the alloy in which Ce is not added and added with 0.254 ωt% Ce are α-Al phase, Al 6 Mn or Al 6 (Mn, Fe) phase, Al 2 CuMg phase and Al 7 Cu 2 Fe phase. Due to the addition of Ce, a new rare earth-rich phase Al 8 Cu 4 Ce is formed. Due to the large chemical affinity between Ce and Cu atoms, there is a strong interaction between Ce and Cu [23,24]. Therefore, when Ce is added to the Cu-containing aluminum alloy, a rare earth-rich Al 8 Cu 4 Ce phase is formed. As a new forming phase, it will have a certain influence on the microstructure, mechanical properties and corrosion properties of Al-Cu-Mn-Mg-Fe alloy. Figure 4 shows SEM images of the microstructure of Al-Cu-Mn-Mg-Fe alloy before and after the addition of Ce. It is found that different Mn/Fe ratios would affect the morphology of Fe-enriched intermetallic compounds, which in turn affect the mechanical properties of the alloy, consistent with the results of Shabestari and Malekan [25]. The coarse AlMnFe particles are broken into smaller irregular particles during deformation such as hot rolling and cold rolling. In alloys without Ce, the main intermetallic compounds are gray with irregularly shaped particles A, and the energy spectrum indicates that the intermetallic compounds are Al 6 (Mn, Fe). After the addition of 0.254 ωt% Ce, some irregularly shaped white intermetallic compounds B appear, and the energy spectrum indicats that such intermetallic compounds are Al 8 Cu 4 Ce [26,27]. Comparing figures 4(a) and (b), it can be seen that the size of Al 6 (Mn, Fe) intermetallic compound of the alloy is significantly reduced after adding 0.254ωt% Ce. According to figures 4(c) and (d), it can be seen that the diffraction peaks of the alloy after adding 0.254ωt% Ce are denser than those of the alloy without Ce, that is, the addition of Ce will increase the solid solubility of Mn in the aluminum matrix, thus refining the iron-rich phase Al 6 (Mn, Fe) to some extent, which is consistent with the research results of Jiang et al [28]. Figure 5 is a TEM morphology and measurement specification diagram of Al-Cu-Mn-Mg-Fe alloy before and after Ce is added. It can be seen that a large number of rod-shaped and disc-shaped intermetallic compounds are present in the alloy. The energy spectrum shows that the rod-shaped intermetallic compounds are Al 20 Cu 2 Mn 3 phases, the disc-shaped intermetallic compounds are Al 2 CuMg (S) phases, and the intermetallic compounds A are Al 6 (Mn, Fe) phases [29]. It can be seen from figures 5(a) and (b) that the alloy without Ce contains more S phases, the alloy with 0.254ωt% Ce contains more T phases. In addition, through the morphology of the T and S phases in the alloy before and after Ce addition, the addition of 0.254ωt% Ce refines the S and T phases of the alloy.
Mechanical properties
In order to study the fracture mechanism of the alloy, the fracture morphology was observed by SEM. Figure 6 shows SEM images of the fracture surface of the Al-Cu-Mn-Mg-Fe alloy before and after Ce addition. Some larger, micron-scale dimples can be found on the fracture surface, as well as finer submicron- and nanoscale dimples distributed between the larger ones. There are a large number of large intermetallic compounds in the alloy without Ce, and the energy spectrum shows that these intermetallic compounds are Al6(Mn, Fe) (see figure 4(c)). Similar to the study by Chai et al [30], large dimples are generally formed at the larger intermetallic compounds Al6(Mn, Fe) and Al8Cu4Ce, while smaller dimples may be formed at smaller internal phases (e.g. the T and S phases). Al6(Mn, Fe) is a brittle phase, which significantly deteriorates the mechanical properties of the alloy when its particles are large. Table 2 shows the room-temperature tensile mechanical properties of the Al-Cu-Mn-Mg-Fe alloy before and after the addition of Ce. It can be seen that the tensile strength, yield strength and elongation of the alloy all increase to some extent after adding 0.254 wt% Ce. Compared with the Ce-free alloy, the alloy with 0.254 wt% Ce shows increases in yield strength and tensile strength of 7% and 15%, respectively, and the elongation increases from 3.1% to 4.8%. This shows that rare earth Ce can improve the comprehensive room-temperature mechanical properties of the Al-Cu-Mn-Mg-Fe alloy. This is because the addition of a small amount of Ce significantly refines the Al6(Mn, Fe) intermetallic compound and promotes the formation of more Al20Cu2Mn3 phase, thereby significantly improving the overall mechanical properties of the alloy.
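The relative improvements quoted above are straightforward percentage changes of the values in table 2. A trivial check of that arithmetic is sketched below; the elongation figures come from the text, while the strength baseline is a placeholder because the absolute values are not restated in this excerpt.

```python
def pct_increase(before, after):
    """Relative increase, in percent."""
    return 100.0 * (after - before) / before

# Elongation rises from 3.1% to 4.8% after adding 0.254 wt% Ce (a ~55% relative gain).
print(f"elongation: +{pct_increase(3.1, 4.8):.0f}% relative")
# Hypothetical tensile-strength baseline illustrating a 15% increase.
print(f"tensile strength (placeholder 300 -> 345 MPa): +{pct_increase(300.0, 345.0):.0f}%")
```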
As mentioned above, Ce alloying can promote aging precipitation and S phase growth. Aging precipitates play an important role in enhancing the mechanical properties of Al-Cu-Mn-Mg-Fe alloys by preventing the movement of dislocations or defects. On the other hand, Ce alloying can significantly refine Al 6 (Mn, Fe) precipitation.
Corrosion performance
In order to evaluate the effect of rare earth Ce on the corrosion resistance of the Al-Cu-Mn-Mg-Fe alloy, Tafel measurements of the alloy before and after the addition of Ce were carried out at room temperature. The Tafel curves of the Al-Cu-Mn-Mg-Fe alloy before and after the addition of Ce show the same morphology, which is the E-I behavior typical of aluminum alloy corrosion. The cathodic branch is controlled by the hydrogen evolution reaction and exhibits linear behavior; the hydrogen evolution process can be represented by formula (1) [31]. The anodic current density increases as the anodic overpotential increases, indicating that the anodic process is activation-controlled. In the Cl−-containing solution, this anodic process can be represented by formulas (2) to (4). Figure 7 shows the SEM surface morphology of the Al-Cu-Mn-Mg-Fe alloy before and after the addition of Ce after the Tafel measurement in different concentrations of NaCl solution at room temperature. It can be seen from figure 7 that the corrosion morphology of the Al-Cu-Mn-Mg-Fe alloys is quite different in the 3.5% NaCl and 0.5% NaCl solutions. After the Tafel measurement, the higher-concentration NaCl solution produces more corrosion convex hulls, and the convex hulls appear white. This is because the main component of the convex hull is non-conductive Al2O3; under the electron beam, since there is no conductive path, charge accumulates on the surface of the Al2O3 and it appears white. Compared with the alloy without Ce, the alloy with 0.254 wt% Ce shows significantly fewer corrosion convex hulls, which is consistent with the result that the corrosion current density of the 0.254 wt% Ce alloy is significantly reduced. In other words, in 3.5% NaCl solution an appropriate amount of Ce can improve the corrosion performance of the alloy. In the 0.5% NaCl solution, the degree of corrosion of the alloy surface is significantly lighter than in the 3.5% NaCl solution, indicating that the NaCl content of the solution is an important factor affecting the corrosion performance of the alloy: the higher the NaCl concentration, the greater the degree of corrosion. Comparing figures 7(a), (c) with figures 7(b), (d), it can be found that, compared with the 3.5% NaCl solution, rare earth Ce has little effect on the corroded surface profile of the alloy in 0.5% NaCl solution, which is also consistent with the earlier result that rare earth Ce has little influence on the corrosion current density in 0.5% NaCl solution. Figure 8 shows the Tafel polarization curves of the Al-Cu-Mn-Mg-Fe alloy in 0.5% and 3.5% NaCl solutions before and after Ce addition at room temperature. The polarization curves before and after the addition of Ce show similar morphology, and both show activation control of the cathodic and anodic processes. Table 3 lists the corresponding electrochemical parameters calculated by the Tafel method. For the alloy without Ce, the corrosion potential (Ecorr) shifts negatively and the corrosion current density (Icorr) increases with increasing NaCl concentration, indicating that the higher the NaCl concentration, the greater the degree of corrosion of the alloy, which is consistent with the SEM observations above.
Compared with the alloy without Ce, in the same NaCl test solution the corrosion potential (Ecorr) is shifted positively while the corrosion current density (Icorr) is decreased. When the solution concentration increases from 0.5% to 3.5%, the corrosion current density (Icorr) of the Ce-free alloy increases by 141.92%, whereas that of the 0.254 wt% Ce alloy increases by 56.75%. Therefore, compared with the alloy without Ce, the alloy with 0.254 wt% Ce has better corrosion resistance; that is, the addition of rare earth Ce can improve the corrosion resistance of the alloy. Figure 9 shows the electrochemical impedance spectra of the Al-Cu-Mn-Mg-Fe alloy before and after adding Ce, measured after soaking in 0.5% NaCl solution for 1 h at room temperature. Figures 9(a) and (b) are Bode diagrams. The amplitude-frequency diagram in figure 9(a) shows a straight line with a slope of approximately −1, indicating that the impedance of the system is under capacitive-reactance control in this frequency range. Figure 9(c) is a Nyquist diagram composed of a capacitive impedance arc and a low-frequency diffusion impedance. A larger capacitive-reactance radius represents a larger impedance, i.e. a greater electrochemical reaction resistance. Compared with the Ce-free alloy, the alloy with 0.254 wt% Ce has a larger impedance, indicating that the addition of rare earth Ce can optimize the corrosion resistance of the alloy. Figure 10 shows the equivalent circuit diagrams, and the fitted resistance and impedance results are based on these two circuits. The electrochemical parameters fitted according to the equivalent circuit are shown in table 4. The solution resistance Rs of the Al-Cu-Mn-Mg-Fe alloy does not change much before and after Ce is added, which indicates that the corrosion solution system is relatively stable. Compared with the Al-Cu-Mn-Mg-Fe alloy containing 0.254 wt% Ce, the charge-transfer resistance of the Ce-free Al-Cu-Mn-Mg-Fe alloy is smaller, indicating that its charge-transfer rate is relatively fast and its corrosion resistance is poor; its Wp is also smaller, so the corrosion products diffuse more easily and the alloy corrodes more readily. Therefore, the corrosion resistance of the Al-Cu-Mn-Mg-Fe alloy with 0.254 wt% Ce is stronger, which is consistent with the experimental impedance spectra.
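The equivalent-circuit fitting summarised in table 4 can be illustrated with a simple Randles-type model: the solution resistance Rs in series with a constant-phase element in parallel with the charge-transfer resistance plus a Warburg element. The sketch below evaluates the impedance of such a circuit over the measured frequency range; the element values are placeholders rather than the fitted parameters, and the exact circuit used by the authors may differ.

```python
import numpy as np

def randles_cpe_warburg(freq_hz, Rs, Rct, Q, n, sigma):
    """Impedance of Rs + [CPE || (Rct + Warburg)].

    Z_CPE = 1/(Q*(j*w)^n); semi-infinite Warburg Z_W = sigma*(1 - j)/sqrt(w).
    """
    w = 2.0 * np.pi * freq_hz
    z_cpe = 1.0 / (Q * (1j * w) ** n)
    z_branch = Rct + sigma * (1.0 - 1.0j) / np.sqrt(w)
    return Rs + 1.0 / (1.0 / z_cpe + 1.0 / z_branch)

# Placeholder element values over 100 kHz .. 10 mHz, matching the measurement range.
f = np.logspace(5, -2, 60)
Z = randles_cpe_warburg(f, Rs=20.0, Rct=8e3, Q=2e-5, n=0.9, sigma=300.0)
print(f"|Z| at 10 mHz ~ {abs(Z[-1]):.0f} ohm, phase ~ {np.degrees(np.angle(Z[-1])):.1f} deg")
```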
Typically, the intermetallic compound and matrix in the alloy have different corrosion potentials. Therefore, when the alloy is placed in a corrosive environment, many corrosion galvanic couples are formed on the surface to accelerate the corrosion of the alloy. When the size of the intermetallic compound is large, it adversely affects the corrosion performance of the alloy. We have demonstrated that the microalloying of the Ce element significantly refines the Al 6 (Mn, Fe) intermetallic compound. Therefore, Ce microalloying can improve the corrosion resistance of the alloy. On the other hand, Ce in the alloy during the etching process may dissolve and redeposit and concentrate on the surface of the alloy. This contributes to the formation of a continuous passivation film, weakening the influence of Cl − , thereby contributing to an improvement in corrosion resistance.
Conclusion
According to the experimental results, the following conclusions are drawn:

(1) The low-temperature annealed Al-Cu-Mn-Mg-Fe alloy mainly contains the S (Al2CuMg), T (Al20Cu2Mn3), Al6(Mn, Fe) and Al7Cu2Fe phases. The addition of rare earth Ce forms the new rare-earth phase Al8Cu4Ce, which has a significant refining effect on the Al6(Mn, Fe) phase.

(2) For the low-temperature annealed Al-Cu-Mn-Mg-Fe alloy, the tensile strength of the alloy with added rare earth Ce increases. Compared with the Ce-free alloy, the yield strength and tensile strength of the 0.254 wt% Ce microalloyed alloy increased by 7% and 15%, respectively, and the elongation increased from 3.1% to 4.8%.
The rare earth Ce can be used to improve the comprehensive mechanical properties of Al-Cu-Mn-Mg-Fe alloy at room temperature.
(3) For the low-temperature annealed Al-Cu-Mn-Mg-Fe alloy, the Tafel measurements indicate that with the addition of 0.254 wt% Ce the corrosion potential is shifted positively and the corrosion current density is reduced, i.e. the alloy has better corrosion resistance. Electrochemical impedance spectroscopy indicates that the alloy with 0.254 wt% Ce has a larger corrosion impedance, showing that a proper amount of rare earth Ce can improve the corrosion performance of the Al-Cu-Mn-Mg-Fe alloy.
|
v3-fos-license
|
2023-07-19T15:19:13.431Z
|
2023-07-01T00:00:00.000
|
259971714
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.mdpi.com/1420-3049/28/14/5446/pdf?version=1689516140",
"pdf_hash": "8dd03e6e163b0576103eb0eba5207f5c0fb45894",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46471",
"s2fieldsofstudy": [
"Materials Science"
],
"sha1": "111312d321375087c132ad5fb7b9c6e536de1269",
"year": 2023
}
|
pes2o/s2orc
|
The Enhanced Durability of AgCu Nanoparticle Coatings for Antibacterial Nonwoven Air Conditioner Filters
Antibacterial nonwoven fabrics incorporating Ag have been applied as masks and air conditioner filters to prevent the spread of disease from airborne respiratory pathogens. In this work, we present a comparative study of Ag ions, Ag nanoparticles and AgCu nanoparticles (NPs) coated onto nonwoven fabrics intended for use as antibacterial air conditioner filters. We illustrate their color changes and durability during operation in air conditioners using antibacterial activity testing and X-ray photoelectron spectroscopy (XPS) analysis. We found that AgCu NPs showed the best antibacterial efficacy and durability. XPS analysis indicated that the Ag concentration on both the AgCu and Ag NP-coated fibers changed little. On the contrary, the Ag concentration on Ag ion-coated fibers decreased by ~30%, and the coated NPs aggregated over time. The color change of the AgCu NP-coated fabric, from yellow to white, is caused by oxide shell formation over the NPs, with nearly 46% of the silver oxidized. Our results, from both antibacterial evaluation and wind-blowing tests, indicate that AgCu NP-coated fibers have higher durability, while Ag ion-coated fibers have little durability in such applications. The enhanced durability of the AgCu NP-coated antibacterial fabrics can be attributed to stronger NP–fiber interactions and greater ion release.
Apart from the monometallic nanoparticles applied as textile finishing agents, bimetallic NPs have received much interest because of their optical, electrical, magnetic, and catalytic capabilities, and especially their excellent antibacterial properties, which differ dramatically from their monometallic counterparts in most circumstances [25,26]. Bimetallic NPs are made by mixing two distinct metal elements to produce a variety of morphologies and architectures basically synthesized by chemical reduction and biosynthesis in recent studies [27,28]. AgCu NP, as a typical alloy, has been thoroughly studied by us [29][30][31] and others [32][33][34][35][36][37][38] and has been found to possess enhanced antibacterial efficacy, greater than either Ag or Cu NPs, used alone or mixed together [32,39]. This has resulted in reduced cytotoxicity [40] as well.
The durability of an antibacterial fiber is associated with its application requirements (e.g., anti-washing is important for the textiles and filters used in water treatment [41,42], but anti-wind-blowing is more important for air conditioner filters). A number of surface-modification routes have been reported for improving the attachment of NPs to fibers, such as protein-coated fibers [66] and plasma treatment [67][68][69]. For practical applications, water-based NP dispersion using an appropriate binder is the primary technique, and sputtering deposition and plasma treatment, especially air-pressure plasma treatment combined with roll-to-roll processing, can be competitive processes for some specific applications.
Although Ag NP coatings have been used for air conditioner filters, there are limited data on their durability. At the same time, our recent studies of AgCu NPs indicated that they showed excellent antibacterial efficacy, which can reduce Ag consumption if they meet the same antibacterial efficacy. Our intention here is focused on evaluating the durability of AgCu NP coatings for nonwoven fabrics as air filters. We also explored their advantages, comparing Ag NPs and Ag ions with PVP-PVA stabilizers, coated onto nonwoven fabric, without using any surface modification processes or special binders. We found that AgCu NP-coated fabrics showed the highest durability, while Ag ion-coated fabrics, with and without PVP-PVA stabilizers, showed the poorest.
Results
Firstly, we checked the appearance of the coated nonwoven fabrics as deposited and after running for various periods (0-30 days). After running as depicted in Figure 1a, Figure 1b shows the color change in the various Ag-coated fabrics at different times. After a month, the color of the Ag NPs changed from beige to light brown ( Table 1), while that of the AgCu NPs changed from yellow to white. In the case of the fabric coated with Ag ions and PVP-PVA stabilizer, after one month, the color changed from light beige to light gray; without PVA-PVP stabilizer, its color changed from white to light gray.
The antibacterial test results are shown in Figure 2 and Figure S1 in the Supplementary Information. We found that, with a certain amount of deionized water (e.g., fifty µL) used to wet the fabric, the antibacterial performance was more obvious and easier to compare; therefore, we used this modification to present the antibacterial activities of the fabric samples. Figure 3a shows that the initial antibacterial efficacies against S. aureus are better for both AgCu NPs and Ag NPs compared with Ag ion-coated fibers. Figure 3b shows that there are similar antibacterial efficacies for all samples, except for the PVP/PVA and Ag ion-coated fibers, which have better efficacy against E. coli bacteria at the initial stage. However, after running for half a month, little antibacterial efficacy remained for the Ag ion-coated fabric, regardless of whether it was coated with PVP-PVA or not, for both bacterial strains. For Ag NP-coated fabrics, there was an increase in antibacterial efficacy after two weeks, followed by a decrease after one month of running. In contrast, for the AgCu NP-coated fabric, the antibacterial efficacy increased after running for both half a month and a full month. The AgCu NP-coated fabric had the best antibacterial efficacy and durability, followed by Ag NPs, while the worst cases were the Ag ion-coated fabrics, with or without stabilizers. (Figure caption: compared to the control sample, **** denotes a statistical significance of p < 0.0001, ** denotes p < 0.01, and * denotes p < 0.05, while 'ns' represents p > 0.05; n = 3; error bars show standard errors of the mean; the red ✕ indicates that there is no inhibition zone for that sample.)
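The significance markers in the caption correspond to comparisons against the control with n = 3 replicates. A minimal sketch of how such labels could be generated is given below; the inhibition-zone values are invented for illustration, and a Welch t-test is used as a stand-in for whichever statistical test the authors actually applied.

```python
import numpy as np
from scipy import stats

def significance_label(p):
    """Map a p-value to the star notation used in the figure caption."""
    if p < 0.0001:
        return "****"
    if p < 0.01:
        return "**"
    if p < 0.05:
        return "*"
    return "ns"

# Hypothetical inhibition-zone diameters (mm), n = 3 per group.
control = np.array([0.0, 0.1, 0.0])
agcu_np = np.array([8.2, 8.6, 8.4])

_, p_value = stats.ttest_ind(agcu_np, control, equal_var=False)
sem = agcu_np.std(ddof=1) / np.sqrt(len(agcu_np))
print(f"AgCu NPs: {agcu_np.mean():.1f} +/- {sem:.2f} mm (SEM), "
      f"p = {p_value:.3g} -> {significance_label(p_value)}")
```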
Figure 4 shows survey and high-resolution C1s, O1s, Ag3d, and Cu2p XPS spectra for the AgCu NP-coated fabric. They show the presence of -COOH/C=O (~290/289 eV) and -COH (286.7 eV) peaks in both the C1s and O1s regions, besides the C1s C-C/C-H peak used for energy calibration, consistent with the fabric composition of PE/PP. The Ag3d5/2 peak is located at 367.5 eV for the as-prepared sample, while the Ag-O (or -OH) peak appears at 369 eV after running for one month. Cu2+ is seen to exist initially, as indicated by the presence of the shake-up satellite peak [70], with little change after running for a month. A comparison of the Ag3d spectra for the Ag-coated fibers, as shown in Figure 5, indicates that there was some oxidation, except for the pure Ag NP-coated samples, even after running for one month. A higher Ag concentration appeared on the Ag ion-coated fabrics at the initial stage (as deposited, from Table 2) than on the Ag and AgCu NP-coated fabrics, although the same amount of Ag was deposited; this can be caused by the higher surface-to-volume ratio of the Ag ion-coated samples compared with both the Ag and AgCu NP-coated ones, whose particles are larger. This means that the smaller the nanoparticles, the stronger the electron emission from the NPs [71,72].
These color changes and analytic data (XPS and antibacterial activity) clear that the initial color of all coated nonwoven fabrics was changed, especially for th NP-coated fabrics, which look like uncoated fabrics. Both Ag and AgCu NP-coate Detailed XPS spectra are shown in Figures S2-S4. Chemical compositional changes, estimated using XPS sensitivity factors, are found in Table 2. It is seen that the Ag concentration is~0.2-0.3% for Ag and AgCu NP-coated fibers and that there was minimal change after running for one month, while that for Ag ion-coated fibers decreased.
These color changes and analytic data (XPS and antibacterial activity) clearly show that the initial color of all coated nonwoven fabrics was changed, especially for the AgCu NP-coated fabrics, which look like uncoated fabrics. Both Ag and AgCu NP-coated fabrics showed no loss of Ag after running for one month, while Ag ion-coated samples, with or without PVP/PVA, showed some loss of Ag.
Discussion
The color changes in both uncoated and Ag-coated fabrics, induced by air currents in the air conditioner, as shown in Table 1, can be summarized as follows. The uncoated nonwoven fabric changes from its original white to a very slight gray; this variation is probably due to the deposition of particulate matter during air flow. The color of the Ag NP-coated fabrics changes from beige to light brown after running for one month, which is principally caused by Ag NP aggregation assisted by the air flow. Ag ion-coated fibers, without and with PVP-PVA stabilizers, change from white (or light yellow) to light brown after running for one month, again caused by Ag NP aggregation, as we have found previously [73]. In contrast, the AgCu NP-coated fabrics did not turn brown (their yellow color simply faded), which means that no aggregation occurred under air flow and suggests that the adhesion of AgCu NPs to the fabric is stronger than that of Ag NPs and Ag ions. The initial (as deposited) yellow color of the Ag ion sample coated with PVP-PVA is due to the reduction of Ag ions by PVP to form nanoparticles [29,30].
Surface chemical analysis by XPS, as shown in Figure 4 and Table 2, indicates that both AgCu- and Ag NP-coated fabrics suffer little loss of NPs, but there is some loss of Ag for the Ag ion-coated fabrics after running for one month. This is further confirmed by the antibacterial data in Figures 2 and 3. The loss of color of the AgCu NPs is caused by the formation of an oxidation shell, as confirmed by the Ag3d XPS in Figures 4 and 5, while the antibacterial activity is enhanced, consistent with Table 1, by this oxidation shell formation [74]. This is because oxidized Ag in AgCu NPs is favorable for Ag ion release in contact with bacteria [75]. However, it is well known that the aggregation of Ag NPs can, in some circumstances, also result in a decrease in antibacterial efficacy [76,77], which may be the main reason for the degradation of the antibacterial performance of the Ag NP-coated fabrics.
The XPS results presented in Figure 5 and Table 2 suggest that the adhesion of the Ag ion-coated fabric samples is very weak, leading to the loss of Ag and also to NP aggregation when air blows over them. It is well known that fibers, when immersed in solutions of AgNO3 in the absence of added reducing agents, undergo a reduction reaction from Ag ions to metallic Ag (Ag+ to Ag0) [78] due to the presence of functional groups (C=O and C-O) on the fiber surface, as shown by our XPS analysis (Figure 4). The loss of Ag from the Ag ion-coated fabric samples is due to zerovalent Ag having a weak interaction with the fibers [79,80], which is the major reason for the loss of Ag under air current exposure. The antibacterial test data presented in Figures 2 and 3 also confirm Ag loss by air currents for the Ag ion-coated samples.
However, the loss of Ag is minimal for the Ag and AgCu NP-coated fabrics, implying that these NPs interact more strongly with the fabric when the air blows. This enhanced interaction is attributed to the presence of PVP-PVA, which can form hydrogen bonds with the fibers [81]. Therefore, the fading of the yellow color of the AgCu NP-coated fibers does not compromise their application as antibacterial filters; on the contrary, this partial oxidation improves the antimicrobial effect, which appears to result from the AgCu NPs being prevented from aggregating.
The antibacterial efficacy changes shown in Figures 2 and 3 clearly indicate that both the Ag- and AgCu NP-coated fabrics exhibited better antibacterial efficacy against S. aureus than against E. coli. This is different from the behavior of Ag and AgCu NPs and Ag ions in aqueous solutions. Secondly, the antibacterial efficacy of the Ag NPs increased over the first 15-day running period and then decreased after running for one month. The increased antibacterial efficacy of the Ag NP-coated fabrics over the first 15 days of airflow may be caused by the formation of a surface oxidation layer on the NPs during that time, while the subsequent decrease can be attributed to Ag NP aggregation, consistent with the Ag NP color change. As one can see in Figures 2 and 3, the antibacterial efficacy of the AgCu NP-coated fabric increases over the full month, which can be attributed to both surface oxidation and a lack of aggregation. The major reason for the decrease in antibacterial efficacy of the Ag ion-coated fabrics, both with and without PVP-PVA stabilizers, during airflow appears to be a loss of Ag due to the weak interaction of Ag with the fibers.
It is well known that antibacterial efficacy depends on the Ag NPs' size [82][83][84][85], shape, and surface chemistry [86][87][88][89]. The smaller the size, the higher the antibacterial efficacy [90,91] under aqueous environmental conditions. In this work, Ag NPs (12 nm) [29] and AgCu NPs (15 nm; a TEM photomicrograph can be found in Figure S5 in the Supplementary Material) were used. There are no TEM data available for the Ag ion-coated fabrics; however, based on the color of these fabrics, the average size of the Ag formed may be smaller than 5 nm (without PVA-PVP) and 6-10 nm (with PVA-PVP stabilizers). For the coated fabrics, the antibacterial efficacies, determined from the ZOI diameter against the two bacteria, are mainly affected by two factors: Ag and Cu ion release [92,93] and the contact killing mechanism [94,95]. Since the ZOI diameter depends on diffusion, this means that both the Ag NP- and AgCu NP-coated fabrics have more NPs and ions diffusing than the Ag ion-coated fabrics, both initially and after running for a month (Figure 3). It appears that Ag ion release plays a more important role in the antibacterial efficacy of the coated fabrics because there is stronger adhesion of the AgCu and Ag NPs to the fabrics, as confirmed by XPS and ZOI testing.
Based on this analysis, a schematic diagram of the coated-fiber color and property changes is given in Figure 6. Among these Ag-based antibacterial fabrics, the AgCu NP-coated samples stand out: they not only kept their mechanical durability but also improved their antibacterial efficiency through the moderate oxidation of Ag. This is confirmed by the change in the Ag3d peak, the minimal change in the Cu2p peak, and the stronger adhesion between the antibacterial material and the fabric, as determined by ZOI.
The adhesion behavior can be summarized as follows. (1) The adhesion of the AgCu NPs to the fibers appears to be the strongest, so they neither detach nor aggregate under air flow. (2) The adhesion of Ag NPs on a fiber is likely weaker than that of AgCu NPs, since the NPs can move, leading to aggregation. (3) The adhesion of the Ag ions, with and without PVA-PVP stabilizer, is probably very weak, resulting in the loss and aggregation of the Ag formed on the fibers. The enhanced antibacterial efficacy of the AgCu NPs on the fibers can be due not only to oxidation layer formation, which speeds up Ag ion release under running conditions, but also to Cu enhancing Ag release, which has been found recently [29].
This study provides us with a facile and cost-effective method to maintain stable AgCu NP-based coated antibacterial nonwoven fabric, which can be considered an excellent candidate for colorless antibacterial filters applied in air conditioning to achieve air purification for human health.
Sample Preparation
Antibacterial agents: aqueous dispersions of Ag and AgCu NPs were diluted to a 200 ppm Ag concentration, with the Cu concentration at 100 ppm, w/v, using deionized water. The composite solution of PVP/PVA and Ag ions with 200 ppm Ag ions was prepared by dissolving AgNO3 in deionized water with the same amount of PVP and PVA added as in the Ag NP dispersions.
Nonwoven fabric: the fabrics were cut to the same size (30 × 33 cm²) and soaked in the different antibacterial agents for 1 min before the excess liquid was rolled out. The Ag-coated nonwoven fabrics were then dried by atmospheric exposure for 24 h in air at room temperature.
Air blowing test: samples of fabric were glued to the air inlet of an air conditioner (KFR-35GW/K150+N3, Chigo Air Conditioning Co., Ltd., Foshan, China) using double-sided adhesive tape. The air conditioner was run continuously for 30 days, and samples were evaluated on days 0, 15, and 30.
Characterization
XPS was conducted on an ESCALab 230i with a monochromatic Al Kα X-ray source (1486.7 eV). Survey spectra were acquired with 1.0 eV steps at a pass energy of 100 eV, while high-resolution spectra were acquired with 0.05 eV steps at a pass energy of 25 eV. All spectra were calibrated by placing the C1s peak for C-C/C-H at 284.8 eV.
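To make the charge-referencing step concrete, the following minimal sketch (an illustration only, not the instrument software) shifts a set of measured binding energies so that the C1s C-C/C-H peak sits at the 284.8 eV reference; the peak positions in the example are hypothetical.

```python
def calibrate_binding_energies(peaks_eV, measured_c1s_cc_eV, reference_eV=284.8):
    """Shift all measured binding energies so the C1s C-C/C-H peak sits at the
    284.8 eV reference used for charge correction."""
    shift = reference_eV - measured_c1s_cc_eV
    return {name: be + shift for name, be in peaks_eV.items()}

# Hypothetical raw peak positions from one acquisition
raw = {"C1s C-C/C-H": 285.1, "Ag3d5/2": 367.8, "O1s": 532.4}
print(calibrate_binding_energies(raw, raw["C1s C-C/C-H"]))
```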
Antibacterial Evaluations
The antibacterial efficacy of the fabrics was evaluated against Gram-negative Escherichia coli (ATCC 8099) and Gram-positive Staphylococcus aureus (ATCC 6538). The sub-culture of the bacterial colony was made from 3-5 generations of the primary culture. Bacteria were grown overnight on a nutrient-agar media plate. Inocula of 0.5 McFarland standard (1.5 × 10⁸ CFU/mL) were maintained in nutrient broth by picking a single colony from the sub-culture plate [32], and fifty microliters of the bacterial solution were added to 5 mL of sterile saline solution to obtain a bacterial suspension at a concentration of 1.5 × 10⁶ CFU/mL for testing. Fabric samples were cut with a 14 mm punch.
Agar dilution is considered to be the gold standard of susceptibility testing, or the most accurate way to measure the resistance of bacteria to antibiotics [96]. In this well-known procedure [97], the agar plate surface was inoculated by spreading a volume of the microbial inoculum over the entire agar surface. Then, samples were placed aseptically, using sterile tweezers, onto the surfaces of the agar plates. The Petri dishes were then incubated under suitable conditions [98] (37 °C). The antimicrobial agent diffuses into the agar and inhibits germination and growth of the test microorganism, after which the diameters of the inhibition growth zones were measured with a vernier caliper at three or more locations.
Statistical Studies
Statistical analyses (average ± SD) were conducted by applying one-way ANOVA (SPSS software, version 8.0). Differences between groups were considered statistically significant at p < 0.05.
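For readers reproducing this step outside SPSS, a minimal sketch of the same one-way ANOVA in Python with SciPy is shown below; the group names and inhibition-zone values are hypothetical placeholders, not measured data.

```python
# Minimal sketch of the one-way ANOVA step described above, using SciPy
# instead of SPSS. The inhibition-zone values below are hypothetical.
from scipy import stats

zoi_agcu = [14.2, 13.8, 14.5]   # mm, AgCu NP-coated fabric (hypothetical, n = 3)
zoi_ag   = [12.1, 12.6, 11.9]   # mm, Ag NP-coated fabric (hypothetical)
zoi_ion  = [9.8, 10.2, 9.5]     # mm, Ag ion-coated fabric (hypothetical)

f_stat, p_value = stats.f_oneway(zoi_agcu, zoi_ag, zoi_ion)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
print("significant at p < 0.05" if p_value < 0.05 else "not significant (ns)")
```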
Conclusions
The durability of Ag and AgCu NPs and Ag ions, directly deposited by dip-roll processes onto nonwoven fabric for air conditioner applications, has been evaluated by antibacterial efficacy, color change, and XPS analysis. We found that the disappearance of the yellow color of the AgCu NP-coated fabrics on air current exposure is attributable to the surface oxidation of the AgCu NPs without degradation of their antibacterial activity, while the decreased antibacterial activity and color change of the Ag NP- and Ag ion-coated fabrics can be attributed to surface Ag NP aggregation and Ag loss. PVP-PVA-stabilized AgCu NPs, deposited onto the fabric by dip-rolling, therefore appear suitable for air conditioning antibacterial filters, offering higher durability and enhanced antibacterial efficacy. Overall, this study proposes a facile and inexpensive method to maintain stable NP-coated fabrics without using any surface modification processes or special binders, while improving antimicrobial efficacy in use; this may be an effective route to colorless antibacterial filters for air conditioners that achieve better air purification, particularly for respiratory health.
Supplementary Materials: The supporting information can be downloaded at https://www.mdpi.com/article/10.3390/molecules28145446/s1. Figure S1: A comparison of the zone of inhibition of S. aureus on Ag-nonwoven fabrics with different pretreatments: A, B, and C were wet with 50 microliters of deionized water, while 1, 2, and 3 were not, before antibacterial evaluations, in the order of as prepared (A, 1), two weeks (B, 2) and four weeks (C, 3).
|
v3-fos-license
|
2021-05-21T14:01:40.458Z
|
2021-05-20T00:00:00.000
|
234797240
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GREEN",
"oa_url": "https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8136103",
"pdf_hash": "d9a9734215b967fe8ee1e5c7cf0abf4bc25cc3ae",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46472",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"sha1": "21724a86eb96991e7842a437b23936bec5a6a094",
"year": 2021
}
|
pes2o/s2orc
|
Modeling clonal structure over narrow time frames via circulating tumor DNA in metastatic breast cancer
Background Circulating tumor DNA (ctDNA) offers minimally invasive means to repeatedly interrogate tumor genomes, providing opportunities to monitor clonal dynamics induced by metastasis and therapeutic selective pressures. In metastatic cancers, ctDNA profiling allows for simultaneous analysis of both local and distant sites of recurrence. Despite the promise of ctDNA sampling, its utility in real-time genetic monitoring remains largely unexplored. Methods In this exploratory analysis, we characterize high-frequency ctDNA sample series collected over narrow time frames from seven patients with metastatic triple-negative breast cancer, each undergoing treatment with cabozantinib, a multi-tyrosine kinase inhibitor (NCT01738438, https://clinicaltrials.gov/ct2/show/NCT01738438). Applying orthogonal whole exome sequencing, ultra-low pass whole genome sequencing, and 396-gene targeted panel sequencing, we analyzed 42 plasma-derived ctDNA libraries, representing 4–8 samples per patient with 6–42 days between samples. Integrating tumor fraction, copy number, and somatic variant information, we model tumor clonal dynamics, predict neoantigens, and evaluate consistency of genomic information from orthogonal assays. Results We measured considerable variation in ctDNA tumor fraction in each patient, often conflicting with RECIST imaging response metrics. In orthogonal sequencing, we found high concordance between targeted panel and whole exome sequencing in both variant detection and variant allele frequency estimation (specificity = 95.5%; VAF correlation r = 0.949). Copy number remained generally stable, despite resolution limitations posed by low tumor fraction. Through modeling, we inferred and tracked distinct clonal populations specific to each patient and built phylogenetic trees revealing alterations in hallmark breast cancer drivers, including TP53, PIK3CA, CDK4, and PTEN. Our modeling revealed varied responses to therapy, with some individuals displaying stable clonal profiles, while others showed signs of substantial expansion or reduction in prevalence, with characteristic alterations of varied literature annotation in relation to the study drug. Finally, we predicted and tracked neoantigen-producing alterations across time, exposing translationally relevant detection patterns. Conclusions Despite technical challenges arising from low tumor content, metastatic ctDNA monitoring can aid our understanding of response and progression, while minimizing patient risk and discomfort. In this study, we demonstrate the potential for high-frequency monitoring of evolving genomic features, providing an important step toward scalable, translational genomics for clinical decision making. Supplementary Information The online version contains supplementary material available at 10.1186/s13073-021-00895-x.
Background
Tumors are known to shed fragments of DNA into the bloodstream through apoptosis and necrosis [1][2][3]. This cell-free DNA, known as circulating tumor DNA (ctDNA), can be acquired minimally invasively through simple blood draws and then isolated from plasma in admixture with cell-free DNA of non-tumor origin. The potential for minimally invasive tumor profiling makes ctDNA an attractive target for biomarker development and serial profiling, especially in metastatic cancers. Despite the relative ease of collection, ctDNA assays are challenging due to lower purity relative to tumor tissue samples. For example, estimated ctDNA purity, or tumor fraction (TFx), ranges from <0.01 to 0.80 in large cohorts of metastatic cancer, with most samples having a TFx <0.10, and varies by cancer type [4].
Despite technical challenges of ctDNA, progress has been made in recent years in leveraging plasma samples for clinical and genomic applications using diverse sequencing approaches, including specific mutation tracking, targeted panel sequencing, shallow whole genome sequencing, methylation, and whole exome/genome sequencing. PCR-based strategies demonstrated the ability to precisely track and quantify known variants in metastatic breast cancer [5,6]. Exome-based and targeted panel sequencing strategies have suggested high concordance between alterations discovered in circulating tumor DNA [4], circulating tumor cells [7], and matched tumor biopsies in solid tumors [8] and blood cancers, like multiple myeloma [9,10], where cancer cells are difficult to reach without bone marrow biopsy. Importantly, ctDNA profiles have also demonstrated the capability to capture novel somatic alterations not present in primary cancers [4,11,12]. In metastatic cancer, ctDNA may act as a "sink" of tumor DNA from multiple metastatic sites from which genetic alterations across multiple sites may be simultaneously profiled [13][14][15]. Further, ctDNA tumor fraction levels have been found to correlate with patient outcomes [9,11,[16][17][18], pointing to a potential for broader clinical application of ctDNA assays. Many potential applications are under development, including cancer screening [19], minimal residual disease assessment [20][21][22][23], and tumor monitoring [18].
Circulating tumor DNA analyses offer the potential to monitor tumor genomic features over more narrow time windows, on the order of days to weeks or less, than is logistically or ethically feasible with repeated tissue biopsies. An outstanding question in oncology, and specifically the ctDNA field, is how rapidly tumor genomes evolve under therapeutic selective pressures, and whether this can be detected via ctDNA through the growing number of sequencing approaches. To evaluate this question, we focused on triple-negative breast cancer (TNBC), an aggressive form of breast cancer defined by the lack of expression of three clinically important therapeutic targets, the ER, PR, and HER2 receptors [24]. Metastatic TNBC (mTNBC) is known to shed relatively high amounts of ctDNA [11]. TNBC constitutes around 10-15% of all breast cancer, but may be responsible for upwards of 30% of breast cancer mortality [24][25][26].
In this work, we provide the first comprehensive analysis of ctDNA genetic profiling over narrow time windows in mTNBC. We leverage serial sets of ctDNA collected from patients with mTNBC enrolled in a phase II clinical trial of Cabozantinib, a multi-receptor tyrosine kinase inhibitor, as an exploratory analysis of available samples. These clinical trial samples, whose primary endpoints were previously reported [27], provide a cohort of patients on a uniform and targeted treatment regimen. Using orthogonal sequencing approaches, we demonstrate the feasibility of ctDNA genetic profiling for modeling pan-tumor clonal dynamics, rare variant detection, copy number analysis, and neoantigen prediction. This work was presented in part as a conference abstract [28].
Patient eligibility, selection, and treatment
Individuals were considered eligible for the study if they were 18 years of age or older with diagnosed TNBC, designated by the following indications: estrogen receptor-negative (ER−; <10% staining by immunohistochemistry [IHC]), progesterone receptor-negative (PR−; <10% staining by IHC), and HER2-negative (0 or 1+ by IHC or fluorescence in situ hybridization [FISH] ratio <2.0). Patients had measurable disease by Response Evaluation Criteria In Solid Tumors (RECIST) version 1.1 and may have received 0 to 3 prior chemotherapeutic regimens for mTNBC. Key exclusion criteria included the following: receipt of another investigational agent within 2 weeks of the first dose of cabozantinib; untreated brain metastases; symptomatic brain metastases, or those which required therapy for symptom control; or prior treatment with a MET inhibitor (other than tivantinib ARQ-197) [27].
Patients who met eligibility criteria and consented to participation were enrolled in a single-arm, two-stage phase II study assessing the efficacy of cabozantinib monotherapy in patients with mTNBC (NCT01738438, https://clinicaltrials.gov/ct2/show/NCT01738438). Treatment consisted of oral dosing of cabozantinib at 60 mg daily over a 21-day cycle. Patients underwent radiographic restaging at 6 weeks and every 9 weeks thereafter. Patients were enrolled from February 2013 to May 2015. The primary endpoint was the activity of cabozantinib, as defined by the objective response rate (ORR) in patients with mTNBC. Predefined secondary endpoints included progression-free survival (PFS), toxicity, and pain. Correlative studies included analysis of MET and phospho-MET expression in archival tumor tissue, and molecular and cellular biomarkers of cabozantinib. The results of this study have been published previously [27]. The analyses presented herein are exploratory analyses of existing plasma specimens. Clinicopathologic data were abstracted from the medical record. Research was approved by local human research protections programs and institutional review boards at the Dana-Farber Cancer Institute and Ohio State University, and studies were conducted in accordance with the Declaration of Helsinki.
Sample collection and processing
Plasma was collected at baseline, on day 8 of therapy, on day 1 of each 21-day cycle of therapy, and, if available, at the time of progression. Eight milliliters of the blood was collected in BD brand EDTA vacutainers and processed within 4 h of collection at the Clinical Laboratory Improvement Amendments-certified core in the Steele Laboratories (Massachusetts General Hospital), where the whole blood was separated into cellular fraction and plasma by centrifuging at 1000-1900×g for 10 min at room temperature. Plasma was stored at −80°C.
Extraction and quantification of cfDNA and germline DNA
Frozen aliquots of the plasma were thawed at room temperature, then centrifuged a second time at 15,000×g for 10 min at room temperature in low-bind tubes to remove residual cells from the plasma. cfDNA was extracted from 1 to 7 mL of plasma and eluted into 40-80 μL of re-suspension buffer using the Qiagen Circulating DNA kit on the QIAsymphony liquid handling system. Germline DNA (gDNA) was extracted from 400 μL of the blood and eluted into 200 μL of re-suspension buffer using the QIAsymphony DSP DNA midi kit on the QIAsymphony liquid handling system. Extracted cfDNA and gDNA were frozen at −20°C until ready for further processing. Quantification of extracted cfDNA and gDNA was performed using the PicoGreen (Life Technologies) assay on a Hamilton STAR-line liquid handling system.
Library construction of cfDNA and gDNA
For cfDNA, initial DNA input was normalized to the range of 25-52.5 ng in 50 μL of TE buffer (10 mM Tris-HCl, 1 mM EDTA, pH 8.0) according to PicoGreen quantification. For gDNA, an aliquot of gDNA (50-200 ng in 50 μL) was used as the input into DNA fragmentation (aka shearing). Shearing was performed acoustically using a Covaris focused-ultrasonicator, targeting 150 bp fragments. Library preparation was performed using a commercially available kit provided by KAPA Biosystems (KAPA HyperPrep Kit with Library Amplification, product KK8504) and IDT's duplex UMI adapters. Unique 8-base dual index sequences embedded within the p5 and p7 primers (purchased from IDT) were added during PCR. Enzymatic clean-ups were performed using Beckman Coulter AMPure XP beads with elution volumes reduced to 30 μL to maximize library concentration. Library quantification was performed using the Invitrogen Quant-iT broad range dsDNA quantification assay kit (Thermo Scientific catalog: Q33130).
In-solution hybrid selection for exome or targeted panels
After library construction, hybridization and capture were performed using the relevant components of IDT's xGen hybridization and wash kit, following the manufacturer's suggested protocol with several exceptions. A set of 12-plex pre-hybridization pools was created. Custom exome bait (TWIST Biosciences) along with hybridization mastermix was added to the lyophilized pre-hybridization pool prior to resuspension. Library normalization and hybridization setup were performed on a Hamilton Starlet liquid handling platform, while target capture was performed on the Agilent Bravo automated platform. Post capture, a PCR was performed to amplify the captured material. After post-capture enrichment, library pools were quantified using qPCR (automated assay on the Agilent Bravo), using a kit purchased from KAPA Biosystems with probes specific to the ends of the adapters. Based on the qPCR quantification, pools were normalized to 2 nM using a Hamilton Starlet and sequenced using Illumina sequencing technology. The targeted panel bait set used in this study was designed at the Broad Institute to maximize pan-cancer utility and contains regions from 396 driver genes previously annotated in the cancer literature.
Cluster amplification and sequencing
Cluster amplification of library pools was performed according to the manufacturer's protocol (Illumina) using the Exclusion Amplification cluster chemistry and HiSeq X flowcells. Flowcells were sequenced on v2 Sequencing-by-Synthesis chemistry for HiSeq X flowcells. The flowcells were then analyzed using RTA v.2.7.3 or later. Each pool of libraries was run on paired 151bp runs, reading the dual-indexed sequences to identify molecular indices and sequenced across the number of lanes needed to meet coverage for all libraries in the pool. For ultra-low-pass whole genome sequencing (ULP-WGS), we sequenced cfDNA to an average genome-wide fold coverage of ∼0.1X.
Tumor fraction, purity, and ploidy assessment of cfDNA
For ULP-WGS, we applied ichorCNA [4], a software package which simultaneously predicts regions of CNAs and estimates the fraction of tumor-derived DNA in ULP-WGS data. The workflow consists of three steps: first, computation of read coverage over binned 1-Mb genomic regions; next, normalization of coverage against known sources of bias; and finally, joint inference of the CNA profile and estimation of tumor fraction.
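As a rough illustration of the first two steps (binning and normalization), the sketch below computes log2 coverage ratios over 1-Mb bins from simulated read positions. It is only a schematic of the idea and not ichorCNA's implementation, which additionally corrects for GC content and mappability and fits a hidden Markov model to infer copy number states and tumor fraction; all inputs shown are hypothetical.

```python
import numpy as np

BIN_SIZE = 1_000_000  # 1 Mb bins, as described above

def binned_log2_ratio(read_positions, normal_depth_per_bin, chrom_length):
    """Toy version of the coverage/normalization step: count reads per 1-Mb bin
    and express tumor coverage as a log2 ratio against a matched normal profile."""
    n_bins = int(np.ceil(chrom_length / BIN_SIZE))
    counts, _ = np.histogram(read_positions, bins=n_bins, range=(0, n_bins * BIN_SIZE))
    # Normalize each sample to its own mean depth before comparing
    tumor_norm = counts / counts.mean()
    normal_norm = normal_depth_per_bin / normal_depth_per_bin.mean()
    return np.log2((tumor_norm + 1e-6) / (normal_norm + 1e-6))

# Hypothetical example: reads on a 50-Mb chromosome with extra coverage in a gained region
rng = np.random.default_rng(0)
positions = rng.integers(0, 50_000_000, size=20_000)
positions = np.concatenate([positions, rng.integers(10_000_000, 20_000_000, size=6_000)])
normal_bins = np.full(50, 400.0)  # flat diploid reference coverage per bin
print(binned_log2_ratio(positions, normal_bins, 50_000_000).round(2))
```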
Variant calling and copy number assessment
Somatic SNV and INDEL calling in both WES and TPS was completed on the Terra/FireCloud platform using gatk-Mutect2 pipelines (https://portal.firecloud.org/?return=terra#methods/getzlab/CGA_WES_Characterization_Pipeline_v0.1_Dec2018/2) [29,30]. With exome sequencing, we employed the standard Mutect2 tools, including the orientation-bias filtering model provided in GATK-4.1.6.0. Taking advantage of the serial design of our study, we leveraged Mutect2 multi-sample mode to borrow information across samples belonging to the same patient for local haplotype reassembly. Panel sequencing variants were delivered by the Broad Institute, which employed tools in GATK-4.1.0.0 with liquid biopsy and duplex-UMI sequencing-specific parameters.
To compare purity and ploidy information from WES to that of ULP-WGS/ichorCNA, we implemented ABSOLUTE [31] and FACETS [32]. ABSOLUTE was run as described via the CGA WES characterization pipeline developed by the Getz Lab (see above). For FACETS, which requires a database of common SNP locations, we chose dbSNP release 138 [32] for hg19-aligned sequencing. Finally, for correlation studies of log ratio, we employed CNVkit [33], a copy number profiling tool which relies on target-level read count binning and circular binary segmentation.
Clonal dynamics and phylogenetic reconstruction
To model the clonal structure and dynamics of metastatic breast cancer, we employed the popular Python-based tool PyClone [34], which uses hierarchical Bayesian techniques to jointly estimate the prevalence of somatic alterations and simultaneously cluster them into groups representing the underlying cancer's cell population structure. PyClone inputs require read count information for somatic alterations, as well as their copy number state and sample purity. For our variant sets, we chose the union of filter-passing alterations from each sampled time point delivered by the commercially available liquid-biopsy targeted panel-sequencing pipeline at the Broad Institute. In addition to this set, we added the filter-passing alterations discovered through orthogonal exome sequencing, so long as they intersected the 396-gene panel bait-target regions. For copy number information, we intersected our genomic variants with the discrete states determined in ichorCNA profiles at baseline and used the corresponding total_copy_number settings for the preparation of genotype files. ichorCNA also provided sample-level estimates of purity. We chose the PyClone binomial model, with standard concentration and base measure parameters for the MCMC process. Each patient model was run for 15,000 iterations, with the initial 1500 steps discarded as burn-in. The sequencing error rate for our TPS-based data was set to 0.001, based on earlier estimates from the panel developers. Phylogenetic tree inferences were made using PyClone estimates of prevalence and the CITUP-QIP algorithm [35], choosing the optimal tree for further investigation of biological context.
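For orientation, the sketch below shows one way a PyClone input table could be assembled from per-variant read counts and ichorCNA total copy number. The tab-separated columns follow PyClone's documented input format, but the variant records, the autosomal normal_cn = 2 assumption, and the conservative minor/major copy split are illustrative assumptions rather than the study's exact preprocessing.

```python
import csv

def write_pyclone_input(variants, out_path):
    """Write a PyClone-style TSV (mutation_id, ref_counts, var_counts,
    normal_cn, minor_cn, major_cn) from per-variant read counts and the
    total copy number taken from the ichorCNA segment overlapping each site."""
    fields = ["mutation_id", "ref_counts", "var_counts", "normal_cn", "minor_cn", "major_cn"]
    with open(out_path, "w", newline="") as fh:
        writer = csv.DictWriter(fh, fieldnames=fields, delimiter="\t")
        writer.writeheader()
        for v in variants:
            total_cn = v["total_copy_number"]          # from ichorCNA segments
            writer.writerow({
                "mutation_id": v["mutation_id"],
                "ref_counts": v["ref_counts"],
                "var_counts": v["var_counts"],
                "normal_cn": 2,                        # autosomal assumption
                # Without allele-specific calls, split total CN conservatively
                "minor_cn": 0,
                "major_cn": max(total_cn, 1),
            })

# Hypothetical example records for one plasma sample
variants = [
    {"mutation_id": "TP53_chr17_7577538", "ref_counts": 880, "var_counts": 120, "total_copy_number": 2},
    {"mutation_id": "PIK3CA_chr3_178952085", "ref_counts": 940, "var_counts": 60, "total_copy_number": 3},
]
write_pyclone_input(variants, "sample_C1D1.tsv")
```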
Neoantigen prediction
Neoantigen-binding predictions for known MHC molecules were completed using NetMHCpan 4.0 [36], a machine learning approach trained on peptide-affinity data. We set scoring thresholds at 0.5% for strong binders and 2.0% for weak binders, representing the rank of the prediction against a panel of random natural peptide sequences, as described by NetMHCpan.
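The rank-based classification described above reduces to a simple thresholding rule, sketched below; the peptides, HLA alleles, and percentile ranks in the example are hypothetical.

```python
def classify_binder(percentile_rank, strong_cutoff=0.5, weak_cutoff=2.0):
    """Label a NetMHCpan %rank as a strong binder (<0.5%), weak binder (<2%), or non-binder."""
    if percentile_rank < strong_cutoff:
        return "strong"
    if percentile_rank < weak_cutoff:
        return "weak"
    return "non-binder"

# Hypothetical predictions: (peptide, HLA allele, NetMHCpan %rank)
predictions = [("KLDDLVQNM", "HLA-A*02:01", 0.12),
               ("SLYNTVATL", "HLA-A*02:01", 1.40),
               ("GTEEYQKRL", "HLA-B*07:02", 5.90)]
for peptide, allele, rank in predictions:
    print(peptide, allele, classify_binder(rank))
```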
Statistical tests and data visualization
Figure plotting and statistical tests were completed in R 3.6.3, with heatmaps generated by the ComplexHeatmap package [37]. All t-tests were performed as unequal-variance (Welch) tests, using the Welch-Satterthwaite approximation for degrees of freedom.
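Although the analyses above were run in R, the same Welch test can be written in one line with SciPy; the sketch below uses hypothetical prevalence values purely to illustrate the call.

```python
from scipy import stats

# Hypothetical cellular prevalence estimates for one cluster at two time points
baseline = [0.18, 0.21]
day_147  = [0.62, 0.68]

# Welch's t-test (unequal variances, Welch-Satterthwaite degrees of freedom)
t_stat, p_value = stats.ttest_ind(day_147, baseline, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```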
Metastatic TNBC patient and sample selection
Thirty-five patients with metastatic TNBC, who were enrolled on a phase II study of cabozantinib monotherapy (NCT01738438), had available, banked, narrowly sampled, plasma-derived ctDNA samples [27]. Using ultra-low-pass whole genome sequencing (ULP-WGS) at approximately 0.1x coverage, ctDNA tumor fraction (TFx) was computationally estimated using the ichorCNA algorithm [4] for each available sample. We identified seven individuals with at least three measurements of ctDNA TFx >0.10 who all had similar baseline TFx values (range 0.22-0.34). The clinical and pathologic characteristics of the selected patients mirrored those of the remaining, excluded study population (Table 1). Among the pertinent characteristics evaluated, we found no significant differences between included and excluded individuals in stage at diagnosis of primary breast cancer (p value: 0.74, chi-squared test), neoadjuvant therapy received (p value 0.24, chi-squared test), and prior lines of metastatic treatment (p value 0.44, chi-squared test), among others. The selected women were between 42 and 69 years old at the time of sample collection, with a median age of 52. Each patient had received neoadjuvant therapy and surgery for localized disease, then had mTNBC confirmed by metastatic biopsy. In total, there were 42 samples from seven patients (4-8 per individual, median = 6; Fig. 1a). Sample collection occurred regularly, every 6-49 days, with a median time of 21 days between samples (Fig. 1b, Additional file 1: Table S1). We performed 10,000x unique molecular identifier (UMI)-based targeted panel sequencing (TPS) for each plasma sample with matched germline, and orthogonal 150x whole exome sequencing (WES) for samples with TFx >0.10, along with matched germline (Additional file 1: Table S2).
Circulating tumor DNA content fluctuates during treatment
Estimates of ctDNA tumor fraction computed from ULP-WGS using the ichorCNA package showed considerable variation during treatment, including crossing below the threshold of 0.10 tumor fraction (Fig. 1c), which has been shown to be associated with overall survival in mTNBC [11]. Tumor fraction ranged from 0.025 to 0.443, with a median value of 0.18. In the first 8 days of cabozantinib treatment, the phase II cohort displayed a significant reduction in tumor fraction (paired two-sample t-test, D = −0.056, 95% CI [−0.089, −0.022], p value = 0.002) (Additional file 1: Figure S1). We evaluated the magnitude and direction of change from cycle 1, day 1 of treatment (C1D1) to cycle 1, day 8 (C1D8), and its association with best imaging response via RECIST v1.1 [38] and Choi CT criteria [39]. We modeled the relationship via logistic regression and found no significant relationship between initial tumor fraction change and RECIST/Choi-measured outcomes (p value = 0.59 and 0.69 for RECIST and Choi criteria, respectively; Additional file 1: Figure S1 B-C). Within the seven-patient cohort, we found similar discordance between tumor fraction dynamics and imaging response: all patients had stable disease as best RECIST v1.1 imaging response despite significant variation in TFx, with some patients' TFx rising and others demonstrating significant decline (Fig. 1d).
Orthogonal ctDNA sequencing approaches are highly concordant
Published reports vary in the concordance of ctDNA single-nucleotide variant (SNV) detection across orthogonal sequencing approaches, from very high concordance [4] to relatively poor concordance, even across commercial platforms [40]. To address this, we assessed whether ctDNA TPS can recapitulate variants detected via WES. In our selected cohort, we identified 45 somatic alterations which were called by Mutect2 in one or more WES experiments and also intersected the genomic intervals captured in the targeted panel sequencing. Using this set of observed alterations, we searched for support in our targeted panel sequencing in order to measure agreement between the two sequencing modalities. In general, we found that the recall of WES events in TPS was very high (sensitivity = 0.955) and reliable across time (Fig. 2a). In our variant set, TPS detected more somatic calls than WES, especially at low variant allele frequencies (VAF); this is not unexpected given the higher achievable sequencing depth in combination with the UMI-based read processing protocol, which reduces false positive results. For each site in the test set, we compared VAF between WES and TPS and found that VAF measurements were highly concordant (Pearson's r = 0.949) (Fig. 2a, b).
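A compact sketch of this concordance check is given below: it computes the fraction of WES-detected sites recovered by TPS and the Pearson correlation of VAFs at shared sites. The variant keys and VAFs are hypothetical, and the real analysis additionally applied the quality and coverage filters described above.

```python
import numpy as np

# Hypothetical per-site VAFs keyed by (chrom, pos, alt)
wes_vaf = {("17", 7577538, "T"): 0.12, ("3", 178952085, "A"): 0.06, ("13", 48953730, "G"): 0.21}
tps_vaf = {("17", 7577538, "T"): 0.11, ("3", 178952085, "A"): 0.07, ("13", 48953730, "G"): 0.19,
           ("10", 89692905, "T"): 0.02}  # extra low-VAF call only seen by deeper TPS

shared = [site for site in wes_vaf if site in tps_vaf]
recall = len(shared) / len(wes_vaf)

x = np.array([wes_vaf[s] for s in shared])
y = np.array([tps_vaf[s] for s in shared])
pearson_r = np.corrcoef(x, y)[0, 1]

print(f"recall of WES sites in TPS: {recall:.3f}")
print(f"Pearson r of shared VAFs: {pearson_r:.3f}")
```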
Collectively, these data demonstrate that the orthogonal TPS and WES sequencing approaches show robust concordance in both SNV detection and VAF among shared loci. Accurate measures of purity and ploidy are crucial to modeling tumor evolution and subclonal structure. To evaluate estimation methods for purity and ploidy, we employed three popular, open-source, orthogonal methods designed for either WES or ULP-WGS data and compared them across time points with high tumor fraction (TFx > 10%). We ran ABSOLUTE [31] and FACETS [32] on WES data and compared the results to estimates provided by ichorCNA [4] run on ULP-WGS data. We found that purity measurements were generally robust to differences in algorithm and sequencing modality (Fig. 2c). However, ploidy solutions were less stable (Fig. 2d), even across samples drawn over a short timeframe from the same patient, among which one would not anticipate a significant shift in ploidy. Overall, ichorCNA provided the most stable ploidy profile, with similar purity estimates to ABSOLUTE/FACETS. For subsequent modeling of clonal structure, we used ichorCNA purity and ploidy solutions.
Fig. 1 Study design and sampling dynamics. a Schematic diagram of the analysis workflow from patient selection, sample capture, and sequencing to downstream analyses. We leveraged the Terra Genomics/FireCloud platform for data storage and high-performance computing tasks. b Schematic representation of sampling density for each of the seven cohort members on study, also specifying whether whole exome sequencing and/or targeted panel sequencing was performed on that sample. All samples received ultra-low-pass whole genome sequencing. c Tumor fraction dynamics colored by individual. Tumor fraction was measured on study using ultra-low-pass whole genome sequencing and the ichorCNA algorithm. d Tumor fraction dynamics recolored by RECIST v1.1 imaging response category. RECIST v1.1 buckets response into several categories: complete response (CR), partial response (PR), stable disease (SD), and progressive disease (PD).
Copy number profiles are stable during treatment for metastatic breast cancer
ULP-WGS of ctDNA provides high-quality copy number information at TFx >0.10, making it feasible to follow somatic copy number alterations (SCNAs) over the course of treatment in the metastatic setting. Using ichorCNA copy number profiles, we examined longitudinal changes in log ratio and copy number state. Reductions in TFx corresponded with lower resolution copy number profiles, as evident in the case vignette of participant RP-466 (Fig. 3a). At the lowest levels of TFx, global trends in SCNAs were maintained, but focal and sub-arm level chromosomal events, like those at the 1p, 4q, 10p, and 12q loci, were lost. These trends were largely mirrored in the profiles of the other cohort patients (Additional file 1: Figure S2). To understand the SCNA dynamics, we assessed SCNA stability between the first and last sequencing time points for each patient in the cohort. We randomly and uniformly sampled genomic positions, querying their states at the first and last time points, and constructed a confusion matrix of possible copy number states (Fig. 3b, c). Overall, we found stable genome structure from first to last sampled time point, with SCNA calls collapsed into amplification, neutral, and deletion states (balanced accuracy = 0.858, sensitivity = 0.815, specificity = 0.900). Similarly, comparison of the discrete copy number at the first and last time point per patient also yielded high accuracy, sensitivity, and specificity (balanced accuracy = 0.830, sensitivity = 0.716, specificity = 0.945), implying stability of the more specific, called states over time.
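The stability summary can be outlined with the sketch below, which collapses discrete copy numbers into deletion/neutral/amplification, compares states at randomly sampled positions between the first and last time points, and reports sensitivity, specificity, and their average. Treating "altered versus neutral" as the positive class is an assumption made for illustration, and the simulated state vectors are not study data.

```python
import numpy as np

def collapse(states):
    """Collapse discrete copy numbers into deletion (<2), neutral (=2), or amplification (>2)."""
    states = np.asarray(states)
    return np.where(states < 2, "del", np.where(states == 2, "neut", "amp"))

rng = np.random.default_rng(1)
first = rng.choice([1, 2, 2, 2, 3, 4], size=500)      # hypothetical states at the first time point
last = first.copy()
changed = rng.random(500) < 0.1                       # assume ~10% of sampled positions change
last[changed] = rng.choice([1, 2, 3], size=changed.sum())

a, b = collapse(first), collapse(last)
altered = a != "neut"                                 # positive class: altered at the first time point
sensitivity = np.mean(b[altered] != "neut")           # altered positions still called altered later
specificity = np.mean(b[~altered] == "neut")          # neutral positions still called neutral later
print(f"sensitivity = {sensitivity:.3f}, specificity = {specificity:.3f}, "
      f"balanced accuracy = {(sensitivity + specificity) / 2:.3f}")
```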
To test the coherence of copy number information provided by ctDNA WES and TPS, we compared the log ratios computed from ULP-WGS with the corresponding measurements from either WES or TPS (target reads only). WES displayed high concordance with ULP-WGS estimates of log ratio (Pearson's r = 0.948), but TPS displayed very little relationship to ULP-WGS (Pearson's r = 0.148) (Fig. 3b, c). In terms of the collapsed copy states (i.e., amplification, neutral, and deletion), WES predicted ULP-WGS states at rates better than chance (balanced accuracy = 0.746, sensitivity = 0.663, specificity = 0.830). TPS predicted these same states no better than random chance (balanced accuracy = 0.523, sensitivity = 0.364, specificity = 0.682). It may be the case that the on-target/off-target binned read count strategy used in the WES/TPS copy number analyses could be improved through the incorporation of allelic imbalance information at common SNP loci, if targeted panel bait sets are appropriately designed.
Modeling clonal architecture over narrow time frames via ctDNA
As ctDNA offers the potential for high density, minimally invasive sample collection, we explored its ability to model the clonal structure of metastatic disease progression. Combining somatic variants found by deep TPS, as well as total copy number information, purity, and ploidy from ULP-WGS, we modeled the tumor subclonal structure using the PyClone software package [34]. PyClone uses a hierarchical Bayesian approach, allowing for joint estimation across variants and time points [34]. PyClone assigns variants into clusters, representing underlying cellular populations or clones, and estimates corresponding adjusted cellular prevalence for each clone within the tumor proportion. Using a combinatorial approach which interfaces easily with PyClone output profiles [35], we built phylogenetic trees and labeled the detected non-synonymous, somatic alterations.
In our cohort, the structure and dynamics of subclonal populations varied considerably. The profiles of three patients, RP-466, RP-527, and RP-557, illustrate the trends observed among the patients (Fig. 4). RP-466's clone populations were generally characterized by stability across the 147-day sampling window (Fig. 4a).
We noted that RP-466's profile appears to fluctuate at the fourth and sixth time points, in discordance with the rest of the profile, potentially revealing overestimation of sample purity at those time points. As an added sensitivity analysis (Additional file 1: Figure S3), we re-ran the PyClone model under the same conditions removing (1) the sample corresponding to the fourth time point (Additional file 1: Figure S3A) and (2) the samples corresponding to the fourth, fifth, and sixth time points (Additional file 1: Figure S3B). The removal of these samples has little impact on the general trends of the clonal structure, with both plots displaying the same hallmark stability as the original analysis. We note that the only identifiable change occurs in the second of the two sensitivity analyses, where low-level residual clusters inconsistently split into an additional group. Since our principal intent is to assess the feasibility of this modeling approach for real-time monitoring, we felt the removal of any of these samples might tend to misrepresent the anticipated utility of the approach; thus, we decided to keep the original data intact for subsequent analyses (Fig. 4a). The corresponding phylogenetic tree for RP-466 is shown in Fig. 4.
In contrast, RP-527 and RP-557 reflect shifts in clonal dynamics over narrow time windows. In RP-527, cluster four significantly expanded from background prevalence levels (D147-D0 = 0.461, n = 2, p value = 0.04552, Welch two-sample t-test) and persisted for at least 49 days (Fig. 4c). This expanding cluster contained two missense variants of consequence, a K/N substitution in the receptor tyrosine kinase DDR2 as well as a splice-site variant in the tumor suppressor RNF43 (Fig. 4d). DDR2 is a known target of cabozantinib, shown to be inhibited in cabozantinib clinical trial specimens via quantitative kinome analysis [41]. On the other hand, RP-557 demonstrates a subclone (cluster 1) that drops in prevalence (D64-D0 = −0.205, n = 2, p value = 0.1732, Welch two-sample t-test) over the 64-day period. This drop appears to co-occur with the dynamics of the dominant clones represented by clusters 3 and 4, which dip and then rise at the final sampled time point (Fig. 4e). This cluster is characterized by decreased prevalence of a missense mutation encoding a single H/L substitution in exon 11 of the tumor suppressor RB1 (Fig. 4f).
Fig. 3 Copy number profiles are stable. Ultra-low pass whole genome sequencing (ULP-WGS) was performed on all 42 ctDNA samples and tumor fraction and copy number data derived using ichorCNA. a Genome-wide copy profile of patient RP-466, derived from ULP-WGS on liquid biopsy ctDNA, showing changes in focal event resolution resulting from shifts in tumor fraction. Dark green segments represent a copy number of 1; blue represents neutral or 2 copies; brown and red represent 3 and 4+, respectively. b Scatter plot of computed log ratios in ULP-WGS, compared to those derived from WES or TPS data using binned read counts of on- and off-target bins. c Discrete copy number confusion matrix for ULP-WGS-based calls at first and last time points. All samples had tumor fraction ≥10%. Genomic positions assayed between first and last time points were uniformly and randomly sampled, and discrete copy number states were capped between one and seven during initial ichorCNA analyses.
Clonal dynamics for the remaining patients are visualized in (Additional file 1: Figure S4) as are all variants and clonal abundances for all samples (Additional file 1: Figure S5), with variants annotated in (Additional file 2: Table S3), both of which were completed using the same TPS/ULP-WGS modeling strategy.
Whole exome sequencing uncovers driver mutations and allows neoantigen discovery
To examine the longitudinal consistency of driver gene variant calling in ctDNA WES data, we looked at known driver mutations previously outlined in the breast cancer literature [42,43] as well as those found by the TCGA Pan-Cancer Atlas studies and OncoKB [44,45]. Our data indicate that WES of ctDNA samples recovers driver variants consistently over multiple time points (Fig. 5a). For example, the most frequently altered genes were TP53 and PIK3CA, detected at every time point in seven and three cohort members, respectively. Among pan-cancer drivers, EP400 was detected in three individuals, and both synonymous and non-synonymous alterations in the genes AMER1 and PTPRB were detected in two cohort members. The low dropout rates of variants over up to seven consecutive exomes at moderate read depth indicate that detection of driver mutations over time with ctDNA is feasible.
Whole exome sequencing of ctDNA allows computational prediction of neoantigens. To this end, we leveraged NetMHCpan 4.0 [36], a published tool for neoantigen prediction from mutational data. Among the patients in our cohort, we detected between 36 and 195 novel alterations (median = 96) predicted to produce either strongly or weakly binding neoantigens (Fig. 5b). These sites account for 182-1007 unique peptide presentations per individual (median = 445). In general, we found the number of novel, neoantigen-producing alterations have a strong and positive correlation with the total mutation burden (Pearson's R = 0.992). Weakly binding neoantigens were predicted more often than strongly binding neoantigens, with an average detection ratio of 2.9:1.
To take advantage of the serial nature of our study, we looked at neoantigen dynamics over time. Representative trends are illustrated by the individual profiles of RP-527 (Fig. 5c) and RP-535 (Fig. 5d). Predicted neoantigen dynamics for the other patients are visualized in Additional file 1: Figure S6. Notably, the majority of strongly and weakly binding neoantigen-producing alleles are detectable at all time points (RP-527 = 475/851, RP-535 = 530/893 omnipresent neoantigens), despite fluctuations in tumor fraction and clinical response. Despite this trend, not all neoantigens are present at baseline. In RP-527 and RP-535, we found 4.1% and 20.8% of variants resulting in neoantigens were totally absent in day-zero sequencing. In RP-527, we find neoantigens which appear at the initial and final time points but disappear intermittently below detectable levels in mid-series sequencing events. In both profiles, we also find neoantigen alleles which drop out without re-detection, possibly indicating loss of specific cell populations harboring these variants. In addition, we find potentially clinically important patterns of dynamics in specific neoantigens which only present later in the course of therapy. In RP-535, for instance, we find specific neoantigens which present in the sixth and seventh exome sequencing assays, suggesting the development or expansion of variants resulting in novel predicted neoantigens.
Fig. 5 Whole exome sequencing uncovers driver mutations and allows neoantigen prediction. Whole exome sequencing results from 31 total samples with tumor fraction ≥10% using short variant and INDEL calling tools from gatk-Mutect2 pipelines (McKenna et al., 2010), with subsequent neoantigen binding predictions for known MHC molecules from NetMHCpan 4.0 (Reynisson et al., 2020). a Driver mutations found via whole exome sequencing across time points. Variant data visualized are those whose genes have been previously annotated in the literature as breast cancer drivers or pan-cancer drivers. b Trends in predicted neoantigens among cohort members. Strong binders are denoted as those peptide sequences with NetMHCpan ranks <0.5%, and weak binders are those with ranks <2%. Neoantigen-generating sSNVs are alterations whose changes to peptide structure are predicted to produce neoantigens capable of strong or weak binding to known MHC molecules. c, d Neoantigen dynamics from patients RP-527 and RP-535, showing proportions of detected neoantigens and dropout over time. Strong, weak, and ND labels correspond to the binding affinity of predicted neoantigens, as well as a non-detected category to capture dropout. Threads are colored by their state at the final sequencing time point.
Discussion
While tumor biopsies remain the gold standard for diagnosis, ctDNA-based "liquid biopsies" overcome many limitations of tumor biopsies: metastases may be inaccessible or not feasible to biopsy serially over time [46,47]; biopsies sample a localized region of a single metastatic site, which may introduce sampling bias [48]; biopsies may be painful and cause anxiety; and biopsies have a risk of bleeding or infection [46]. Minimally invasive ctDNA assays from simple blood draws offer the potential to serially analyze tumor genomic features through a more patient-centric approach. To date, our understanding of the opportunities and limitations of frequent ctDNA analyses over days to weeks via orthogonal sequencing approaches is limited. In this study, we sought to understand (1) tumor genomic changes (SNVs, SCNAs, predicted neoantigens) detectable over narrow time windows and (2) the performance, utility, and limitations of orthogonal sequencing approaches and algorithms on serial ctDNA samples.
This study provides an important assessment of the tumor genomic features that change, or remain stable, over narrow time windows. Overall, copy number was stable across the seven patients in this cohort. This may reflect that large-scale SCNA events occur early in TNBC development and subsequent alterations are infrequent [49,50], but should be evaluated in other tumor types and settings (e.g., DNA damaging chemotherapy). Alternatively, this may reflect challenges and limitations of SCNA characterization via ctDNA (discussed further below).
Conversely, we detected shifts in SNVs both via TPS and WES approaches. To track within-patient clonal dynamics, we evaluated a combined ULP-WGS + TPS approach to obtain purity, ploidy, copy number, and variant data, using PyClone for clonal reconstruction. In general, performance of PyClone and subsequent phylogenetic reconstruction appear to depend heavily on the number of variants recovered, and the number of samples taken. Our profiles with the best resolution had a high number of variants and many time points. Deeper sequencing may increase the number of trackable variants recovered from ctDNA, as well as lower the error in modeling prevalence. Additionally, joint modeling across sampling events is beneficial for studying response or resistance to treatments over time. Finally, there were many low prevalence variants (VAF< 20%), which were inconsistently recovered across time. Further resolution of these low-prevalence variants may be possible with deeper sequencing, or deep sequencing of paired WBC, depending on whether these markers represent members of the tumor cell phylogeny or contaminating artifacts of clonal hematopoiesis. Recent advances in personalized ctDNA-based assays, in which a patients' tumor is sequenced and validated mutation-specific primers are developed, may allow for higher sensitivity detection of known variants in ctDNA than the method used in this study [20,22,51]. However, these personalized mutation panels fail to capture the development of new alterations over time, limiting their utility to largely retrospective analyses.
An intriguing finding regarding predicted neoantigens from this study was the detection of either newly developed or clonally expanding alterations that result in predicted neoantigens over relatively short time frames. This is the first study, to our knowledge, to specifically track shifts in predicted neoantigens via ctDNA within individual patients. Knowledge of sustained and emerging neoantigen peptides has potential implications for immunotherapy, including neoantigen vaccine development and selecting patients or optimizing tumors to respond to checkpoint inhibitors.
This study is unique in analyzing serial ctDNA samples via multiple ctDNA sequencing modalities (ULP-WGS, TPS, WES). A major hurdle to clinical implementation of ctDNA sequencing is inconsistency across platforms [40,52]. When evaluating specific alterations that demonstrated adequate sequencing quality and coverage on both TPS and WES, we found very high recall. This reinforces the importance of clinical ctDNA sequencing assays reporting quality metrics. While reliable total copy number information can be inferred in most ctDNA samples via ULP-WGS, allele-specific SCNA resolution, especially for exome-based or panel-based assays, remains a challenge. Other future areas of investigation include determinants of ctDNA "shedders" versus "non-shedders" and the best use of very low TFx samples. Additionally, we believe that investigating approaches specific to copy number analysis on liquid biopsy exome and panel sequencing would allow for more precise and affordable genetic monitoring in metastatic cancers. Further work will also explore pre-analytical and analytical factors impacting ctDNA results, particularly for rare variants detected at low allele fractions.
This study does have limitations. Given the multi-sample, orthogonal sequencing analysis approach, we focused on a small number of patients with a single cancer subtype, who all received the same therapy on a clinical trial without a clinical response. Thus, while we make some fundamental observations on technical aspects, generalization will require larger studies in other tumor types and clinically interesting settings. Also, these samples were collected using EDTA tubes rather than the preservative-based tubes now commonly used for ctDNA studies; however, processing conformed to ASCO/CAP guidelines [53].
Conclusions
In this work, we demonstrate that analysis of multiple ctDNA samples collected from patients over narrow windows of time is not only feasible, but provides potentially important insights into clonal and neoantigen dynamics. Our approach reveals strengths and limitations of existing ctDNA sequencing and analytical approaches. In the future, we anticipate the expansion of ctDNA applications in clinical use, including serial genetic monitoring of tumor dynamics in metastatic patients, neoantigen prediction for immunogenic therapies, and real-time modeling of prognoses. Our hope is that low cost, minimally invasive genetic monitoring, made possible through ctDNA profiling, expands the toolkit of physicians and patients in metastatic cancers of all types-allowing more responsive approaches to the management of metastatic treatment and facilitating novel methodologies in translational research.
Additional file 1. Supplementary Figures and
Additional file 2. Supplementary Table S3.
Availability of data and materials
All sequencing data supporting the conclusions of this paper are deposited to dbGaP (Accession Number: phs001417.v2.p1). While awaiting data release via dbGaP, investigators may contact the corresponding author to discuss gaining access to the data. Data and code from downstream analyses are available through a GitLab repository (https://gitlab.com/Zt_Weber/narrow-interval-clonal-structure-mbc.git) [54].
Declarations
Ethics approval and consent to participate
All patients in this study provided written consent for participation under a protocol approved by local human research protections programs and institutional review boards at Dana-Farber Cancer Institute (DFCI#12-431) and Ohio State University (OSU#2018C0211). This study was conducted in accordance with the principles described in the Declaration of Helsinki.
Consent for publication
Not applicable.
|
v3-fos-license
|
2023-02-25T14:11:09.248Z
|
2021-01-01T00:00:00.000
|
257162169
|
{
"extfieldsofstudy": [],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s42452-020-04046-6.pdf",
"pdf_hash": "bb04a065a401f83dce1d83f97f6aee27a49ee2be",
"pdf_src": "SpringerNature",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46473",
"s2fieldsofstudy": [
"Materials Science",
"Agricultural And Food Sciences"
],
"sha1": "bb04a065a401f83dce1d83f97f6aee27a49ee2be",
"year": 2021
}
|
pes2o/s2orc
|
Rapid green synthesis of non-cytotoxic silver nanoparticles using aqueous extracts of 'Golden Delicious' apple pulp and cumin seeds with antibacterial and antioxidant activity
A simple, facile and rapid microwave-irradiated system was applied to synthesize silver nanoparticles using 'Golden Delicious' apple pulp (Malus domestica) and cumin (Cuminum cyminum) seed extracts. The phytosynthesized AgNPs were characterized by Ultraviolet–Visible Spectroscopy (UV–vis), Fourier Transform Infrared spectroscopy (FTIR), X-ray Diffraction (XRD), Transmission Electron Microscopy (TEM) and Zetasizer analysis. The presence of face-centered cubic crystalline metallic silver in the AgNPs from apple and cumin extracts was confirmed, and the particles were monodisperse with size distribution ranges of 5.46–20 nm and 1.84–20.57 nm, respectively. This study established an efficient green synthesis approach that produced, to date, the smallest silver nanoparticles obtained using these two extracts. According to the results, AgNPs synthesized using both extracts were non-toxic against L929 mouse fibroblast cells, while they were effective against both Gram-positive (Staphylococcus aureus) and Gram-negative (Escherichia coli) bacteria, with a greater effect on S. aureus. Moreover, AgNPs synthesized through cumin extract exhibited a higher ABTS scavenging ability (96.43 ± 0.78% at 160 μg/mL) in comparison to apple pulp extract-mediated AgNPs, while both AgNPs showed lower activity for DPPH (27.84 ± 0.56% and 13.12 ± 0.32% from cumin seed and apple pulp extracts, respectively). In summary, our results suggest the green, non-cytotoxic AgNPs synthesized in this study could be a promising template for further biological and clinical applications.
Introduction
Nanotechnology encompasses many fields, such as physics, chemistry, pharmacy, biology and materials science, and is a rapidly developing multidisciplinary area of science that has become a general-purpose technology benefiting society [1]. In the past few decades, silver nanoparticles have attracted tremendous attention due to their excellent anti-pathogenic mechanism, thanks to their unique and characteristic physical, chemical and biological properties [2]. The antimicrobial prophylaxis of AgNPs widens their application in many aspects of medical science, i.e., sterilization of medical devices, drug delivery systems, oral health protection, and wound treatment [3]. Simultaneously, nano-silver is also being utilized in various other fields, including water treatment, cosmetics, textiles, biomedicine, DNA sequencing, food sanitation and packaging, sensing, biosensing, surface-enhanced Raman scattering (SERS), optoelectronics, and electronics [4][5][6][7][8]. Therefore, a range of techniques, including chemical methods, has been adopted for synthesizing silver and other metallic nanoparticles with desirable properties and wide-range applicability [17,18]. However, the multidisciplinary applications of silver nanoparticles demand their rapid and mass production, and scientists are trying to design faster, well-established and more inexpensive approaches for the fabrication of AgNPs on a large scale. To this end, plant-mediated synthesis with microwave irradiation could be a fast and facile option for nanoparticle production. Microwave irradiation provides fast and homogeneous heating, which ensures consistent nucleation and growth of nanoparticles in the reaction medium [19]. Besides, compared to conventional heating, electromagnetic radiation in the microwave can decrease the reaction time by a factor of ~20 without disturbing the reaction conditions [20,21]. During synthesis, the growth and the capping of a particle are antagonistic to each other, and the binding affinity of the capping agent greatly influences the final sizes, shapes and dispersity of NPs [22]. Previous studies indicated that the higher and more uniform heating of a microwave system accelerates the reaction kinetics in the synthesis medium, which increases the rate of capping and thereby produces nanoparticles with a smaller size distribution [23].
Considering all of the above facts, this study was designed to establish a fast and facile microwave-accelerated (with two optimized parameters, i.e., time and temperature) green synthesis of silver nanoparticles using golden delicious apple (Malus domestica 'Golden Delicious') pulp and cumin (Cuminum cyminum) seed extracts without involving any supplementary chemicals. These two plant-based materials were chosen because of their availability and because they are potential sources of different phytochemicals, which might act as very effective reducers and stabilizers during the synthesis process. Apple fruits are rich in water-soluble hydrocarbons, proteins, tartaric acid, polyphenolics, flavonoids, phytonutrients and antioxidants [24]. On the other hand, cumin seeds are popular as a spice and herbal medicine. The essential volatile oils (about 5%) present in cumin seeds are responsible for their distinctive flavor and warm, strong aroma. Some important essential oil components in cumin seeds are cymene, cuminaldehyde, and different terpenoids [25].
After the synthesis was completed, identification and characterization were carried out using different analytical methods, i.e., Ultraviolet-Visible Spectroscopy (UV-vis), Fourier Transform Infrared spectroscopy (FTIR), X-ray Diffraction (XRD), Transmission Electron Microscopy (TEM) and Zetasizer analysis. Moreover, the antimicrobial potential, cytotoxicity and antioxidant activity of the fabricated AgNPs were examined to determine their suitability for a wider range of applications.
Materials
All chemicals used in the study were of analytical grade and were used in all experiments without further modification or purification. Silver nitrate (AgNO3) and other chemicals were obtained from Sigma-Aldrich (St. Louis, MO, USA). Golden delicious apples and dried cumin seeds were purchased from a local grocery store. Ultra-purified water from a water purification system (Purelab flex, Veolia Water Solutions and Technologies, Tienen, Belgium) was used for all solutions of reacting materials and for other purposes. All glass containers were washed with ultra-purified water and dried appropriately before use. Properly autoclaved instruments were used for the antibacterial, antioxidant and cytotoxicity studies.
Preparation of apple (Malus domestica 'Golden Delicious') pulp extract
Fresh apple fruits were washed with running tap water to eliminate unwanted dust particles and then thoroughly washed several times with ultra-purified water. Using a sterilized kitchen paring knife, the fruits were peeled, and 100 g of the seedless pulp was sliced into small pieces. These pieces were then placed in a food-grade kitchen blender and ground well to make a pulp paste. After adding an equal volume of ultra-purified water, the paste was transferred into a conical flask, mixed well, and placed in a laboratory-grade microwave for 3 min at a maximum power level of 700 W for irradiation to extract the biomolecules present in the apple pulp. After cooling to room temperature, the pulp suspension was centrifuged at 5000 rpm for 15 min. Finally, the collected pale-yellow supernatant was filtered using Whatman No. 1 filter paper to eliminate impurities and stored at 4 °C for further experiments.
Preparation of cumin (Cuminum cyminum) seed extract
Dried cumin seeds were crushed into a fine powder. About 10 g of this powder was added to 100 mL of ultra-purified water and placed in an ultrasonic bath at 70 °C for 20 min. The solution was then left at room temperature to cool and centrifuged at 5000 rpm for 15 min. After centrifugation, a visible yellowish-brown supernatant was collected and filtered using Whatman No. 1 filter paper to remove stringy discarded particles. Lastly, the final cumin seed extract was stored at 4 °C for further use.
Synthesis of silver nanoparticles
The fabrication of silver nanoparticles was conducted separately for each plant extract. To optimize the synthesis protocol, different synthesis cycles were designed by varying the ratio of plant extract to salt solution as well as the temperature at different wavelengths and time durations. Successful synthesis was accomplished when silver nitrate salt (0.017 g AgNO3; 1 mM) was combined with 90 mL of ultra-purified deionized water and 10 mL of each plant extract. With magnetic stirring bars, the solutions were transferred into the laboratory-grade microwave at 90 °C for 15 min at a maximum power level of 300 W. After the microwave irradiation, the color change in the synthesis media primarily indicated the completion of the fabrication cycle and the production of AgNPs.
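As a quick arithmetic check of the salt quantity quoted above (0.017 g AgNO3 for a 1 mM solution in roughly 100 mL of total reaction volume), the required mass follows from the molar mass of AgNO3 (about 169.87 g/mol); the short Python snippet below is purely illustrative.

# Illustrative check: mass of AgNO3 needed for a 1 mM solution in ~100 mL.
molar_mass_agno3 = 169.87   # g/mol (Ag 107.87 + N 14.01 + 3 x O 16.00)
concentration = 1e-3        # mol/L (1 mM)
volume = 0.100              # L (90 mL water + 10 mL extract, approximately)

mass_required = molar_mass_agno3 * concentration * volume
print(f"{mass_required:.4f} g")  # ~0.017 g, matching the amount used above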
Purification of fabricated nanoparticles
After the synthesis was successfully completed, the silver nanoparticles produced from both plant extracts were filtered using 2.5 µm pore-sized Whatman No. 5 filter paper to remove large discarded particles, and the remaining solutions were centrifuged at 5000 rpm at 4 °C for 15 min. The precipitated solids were washed several times with ultra-purified H2O to eliminate any remaining undesired plant extract. Finally, the AgNPs, free of plant debris, were placed in a laboratory-class dryer under vacuum conditions to collect dust-free NPs. At the end of all procedures, the nanoparticles were transferred to dark-colored bottles and stored in the refrigerator (4 °C) for further studies.
Characterization of silver nanoparticles
The optical properties of the synthesized particles were screened using a UV-vis spectrophotometer (Shimadzu UV-1700) to monitor the bioreduction of Ag+ ions and confirm the formation of nano-silver from silver ions over the range of 200-800 nm. IR spectroscopic measurements were conducted using a Shimadzu IR Prestige-21 FTIR-ATR instrument. To evaluate the crystallinity of the NPs, the phytosynthesized silver nanoparticles were examined by X-ray diffraction (PANalytical Empyrean model, UK), where the XRD patterns were collected over a 2θ range from 10° to 90° with a step size of 0.02°. Origin 8.5 software (OriginLab Corporation, Northampton, MA, USA) was used to regenerate the XRD graphs. The morphology of the silver nanoparticles was revealed by Transmission Electron Microscopy (TEM 1400, JEOL, Tokyo, Japan) at an accelerating voltage of 120 kV. In this case, samples were prepared by placing small amounts of nanoparticle suspension drop by drop on copper grids, which were then dried at room temperature and used for TEM imaging. In addition, measurements of the zeta potential and size distribution of the AgNPs were performed with a particle/zeta analyzer (Zetasizer Nano ZS, Malvern Instruments Ltd., UK).
Antibacterial activities of biosynthesized silver nanoparticles
The antibacterial potential of the phytosynthesized AgNPs was studied by the agar well diffusion assay for both Gram-positive (Staphylococcus aureus) and Gram-negative (Escherichia coli) bacteria. The bacterial suspensions were adjusted to 0.5 McFarland turbidity (1.5 × 10^8 CFU/mL). Using a gel puncture, several wells (around 7 mm in diameter) were created on Mueller-Hinton Agar (MHA, Merck), and 100 µL of bacterial inocula were spread onto these agar plates. Afterwards, the plates were air-dried at room temperature. Exactly 5 µg of each biosynthesized powdered NP sample was added to 5 mL of ultra-purified H2O, and these were used as the working suspensions. Then, 50 µL aliquots of the suspensions were poured into each well of the medium and incubated at 35 °C for 24 h. After incubation, the clear and visible regions around the wells, indicating zones of inhibition by the nanoparticles, were measured as diameters (mm). The antibacterial potential of the silver nanoparticles against these bacterial strains was compared with standard antibiotic discs of gentamicin (Oxoid, 10 µg/sensidisc). The experiments were performed in triplicate.
Cytotoxicity of phytosynthesized silver nanoparticles
The in vitro cytotoxicity of the phytosynthesized silver nanoparticles was tested by evaluating cell viability in the L929 mouse fibroblast cell line using the XTT assay. DMEM-F12 medium supplemented with penicillin-streptomycin and 10% fetal bovine serum was used to maintain the L929 cell line culture. The cells were incubated at 37 °C with 5% CO2. The cells were detached from the vessels using trypsin, and viable cells stained with Trypan blue were counted. The density of viable cells was adjusted to 10^6 live cells per 1 mL of medium, followed by plating 100 µL of cell suspension in each well of a sterile 96-well flat-bottom microplate (BD Biosciences). Silver nanoparticles at varying concentrations (0, 0.1, 0.25, 0.5, 1, 2.5 and 5 µg/mL) were added to the cultured cells and incubated at 37 °C for 24 h. Subsequently, the old medium was replaced with fresh medium (100 µL) containing 100 µL of XTT (2,3-Bis-(2-Methoxy-4-Nitro-5-Sulfophenyl)-2H-Tetrazolium-5-Carboxanilide) solution in DMEM (0.5 mg/mL with 7.5 μg/mL phenazine methosulfate). The plates containing the medium suspensions were then incubated at 37 °C for 4 h. Finally, the optical densities of the live cell suspensions at 450 nm were measured using a multi-plate reader (Lab-Line Instruments, Melrose Park, IL, USA).
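The assay above reports optical densities rather than an explicit viability formula; a common convention, shown below only as an assumption and not necessarily the exact calculation used in this study, is to express viability as the blank-corrected absorbance of treated wells relative to the untreated (0 µg/mL) control.

def percent_viability(od_treated, od_control, od_blank=0.0):
    """Percent cell viability from XTT optical densities at 450 nm,
    relative to the untreated control (a common convention, assumed here)."""
    return 100.0 * (od_treated - od_blank) / (od_control - od_blank)

# Hypothetical optical densities, for illustration only
print(round(percent_viability(od_treated=0.82, od_control=0.85, od_blank=0.05), 1))  # 96.2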
Antioxidant activity of biosynthesized silver nanoparticles
The free radical quenching property of the nanoparticles was measured using the stable free radical 2,2-diphenyl-1-picrylhydrazyl (DPPH) in accordance with Phull [26], and using the stable radical 2,2′-azino-bis(3-ethylbenzothiazoline-6-sulphonic acid) (ABTS) in accordance with Arnao [27]. Briefly, different concentrations of nanoparticles were mixed with 2 mL of methanolic DPPH solution (40 mg/mL) and 1 mL of 50 mM Tris-HCl. The reaction mixture was incubated at room temperature in the dark for 30 min and the absorbance was recorded at 517 nm. The ABTS stock solution was prepared by mixing 7 mM ABTS and 2.45 mM potassium persulfate in methanol and incubating the mixture in the dark before use. The percentage of free radical quenching was calculated as follows:

% free radical quenching = [(Absorbance of control − Absorbance of sample) / Absorbance of control] × 100 (1)
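Equation (1) translates directly into a one-line calculation; the snippet below applies it to hypothetical absorbance readings (the function name and example values are illustrative only).

def percent_scavenging(abs_control, abs_sample):
    """Free radical quenching (%) as defined in Eq. (1):
    [(A_control - A_sample) / A_control] x 100."""
    return 100.0 * (abs_control - abs_sample) / abs_control

# Hypothetical DPPH absorbances at 517 nm, for illustration only
print(round(percent_scavenging(abs_control=0.90, abs_sample=0.65), 2))  # 27.78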
Results and discussion
Silver nanoparticles were fabricated from silver nitrate (AgNO3) salt using Malus domestica pulp and Cuminum cyminum seed extracts. The formation of AgNPs was indicated by an immediate color change of the reaction medium after a certain period of time (Fig. 1).
UV-Vis spectrographic analysis
The absorption maximum of the biosynthesized silver nanoparticles from apple pulp extract was observed at 440 nm, whereas that from cumin seed extract was found at 439 nm (Fig. 2). Metallic nanoparticles show UV-Vis spectral peaks in a specific range of the electromagnetic spectrum due to their surface plasmon resonance (SPR). It has been observed that silver nanoparticles provide a characteristic sharp peak in the range of 400-475 nm [28]. SPR is a resonance effect that arises from the interaction of the free and highly mobile electrons of metallic nanoparticles with the incident photons of visible light during UV-vis spectroscopy [29]. The interaction depends on the size and shape of the NPs, and the peak shifts to a longer wavelength as the particle size increases [30].
Additionally, the morphological features of NPs and their dispersity in suspension can also be monitored by this spectroscopic analysis [31]. The characteristic single sharp peak for both samples indicated the presence of monodispersed, smaller-sized silver nanoparticles [32], which was confirmed by TEM imaging and zeta analysis.
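As a small illustration of how the SPR maximum can be located in a digitized UV-vis spectrum (the spectrum generated below is synthetic, not the measured data of this study), one can simply search for the absorbance maximum within the 400-475 nm window characteristic of AgNPs.

import numpy as np

def spr_peak(wavelengths_nm, absorbance, window=(400.0, 475.0)):
    """Locate the surface plasmon resonance maximum of a UV-vis spectrum
    within the wavelength window characteristic of AgNPs (~400-475 nm)."""
    wavelengths_nm = np.asarray(wavelengths_nm, dtype=float)
    absorbance = np.asarray(absorbance, dtype=float)
    mask = (wavelengths_nm >= window[0]) & (wavelengths_nm <= window[1])
    idx = np.argmax(absorbance[mask])
    return wavelengths_nm[mask][idx], absorbance[mask][idx]

# Synthetic spectrum with a peak near 440 nm, as reported for the apple-pulp AgNPs
wl = np.arange(300, 701, 1.0)
ab = 0.2 + 0.8 * np.exp(-((wl - 440.0) / 40.0) ** 2)
print(spr_peak(wl, ab))  # (440.0, 1.0)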
Fourier transforms infrared (FTIR) analysis
The Fourier transform infrared (FTIR) spectrum of the phytosynthesized silver nanoparticles from apple pulp extract (Fig. 3a) showed a band at 3381.21 cm−1 corresponding to aliphatic primary amine (N-H) stretching. The band at 1641.42 cm−1 is attributed to a strong monosubstituted alkene (C=C) bond. A strong C-O stretching band of primary alcohol was found at 1055.06 cm−1. The IR band at 972.12 cm−1 represents a strong disubstituted (trans-) alkene bond, whereas the stretch of a medium trisubstituted alkene (C=C) was found at 794.67 cm−1.
On the other hand, Fig. 3b represents the IR spectrum of the biosynthesized silver nanoparticles from cumin seed extract. The broad peak observed at 3373.50 cm−1 represents the medium aliphatic primary amine (N-H) stretch. The band at 1639.49 cm−1 indicates a strong monosubstituted alkene (C=C) stretch. The absorption peak at 1415.75 cm−1 could be identified as the -OH stretching of H2O or ethanol present in the sample. The peak at 1058.92 cm−1 is due to the strong C-O stretching vibration of primary alcohol. The band at 972.12 cm−1 represents a strong disubstituted (trans) alkene bond, the peak at 794.67 cm−1 is owing to the medium trisubstituted alkene (C=C) stretch, and finally, 655 cm−1 corresponds to strong C-Br stretching (halo compound). The FTIR spectra therefore suggest that amino acid residues, proteins, reducing sugars, polyphenols, flavanones, and terpenoids available in the plant extracts played vital roles in the reduction of silver ions into AgNPs and interacted with the phytosynthesized silver nanoparticles to stabilize them [33,34].
Fig. 3 IR spectra of biosynthesized silver nanoparticles obtained from (a) apple pulp extract and (b) cumin seed extract.
X-ray diffraction
X-ray diffraction (XRD) was used to examine the crystalline structure of the green-synthesized silver nanoparticles. The XRD spectrum of the AgNPs biosynthesized with fresh Malus domestica pulp extract is illustrated in Fig. 4a, with the observed diffraction peaks indexed to the (111), (200), (220) and (311) planes, respectively. The outcomes of the XRD studies signify that both AgNP specimens consist of face-centered cubic crystalline metallic silver, corresponding to the JCPDS (Joint Committee on Powder Diffraction Standards) file no. 04-0783 [35].
Transmission electron microscopy (TEM)
The morphological structure and size distribution of the phytosynthesized silver nanoparticles were analyzed by TEM. The TEM profile of the AgNPs biosynthesized with fresh Malus domestica pulp extract showed that the nanoparticles, imaged at the 100 nm scale, are morphologically spherical or globular in shape with a size distribution of 5.46-20 nm in diameter (Fig. 5a). The TEM micrograph of the AgNPs biosynthesized using Cuminum cyminum seed extract at the 50 nm scale is shown in Fig. 5b. The result confirmed that these nanoparticles are almost globular in shape, with most particles in the size range of 1.84 to 20.57 nm. Moreover, both synthesized nanoparticle samples are uniformly distributed, i.e., monodisperse.
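For readers reproducing such measurements, the TEM size statistics can be summarized with a few lines of Python; the diameters listed below are hypothetical values spanning the reported range, not the measured data of this study.

import numpy as np

def size_summary(diameters_nm):
    """Summarize TEM particle diameters: mean, sample standard deviation and range."""
    d = np.asarray(diameters_nm, dtype=float)
    return {"mean": d.mean(), "std": d.std(ddof=1), "min": d.min(), "max": d.max()}

# Hypothetical diameters spanning the range reported for the cumin-seed AgNPs
diameters = [1.84, 4.2, 6.5, 8.1, 9.7, 12.3, 15.0, 18.4, 20.57]
print(size_summary(diameters))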
Particle size distribution and zeta potential measurement
Characterization of nanoparticles by particle size distribution and zeta potential measurement reveals information regarding the size distribution, surface charge, colloidal behavior and stability of NPs [36]. The Zetasizer analysis of the biosynthesized silver nanoparticles from apple pulp extract revealed an average nanoparticle size of 20.70 nm and an average zeta potential of −25.80 mV, as shown in Fig. 6. The AgNPs obtained from cumin seed extract had an average particle size of 14.30 nm with a zeta potential of −27.8 mV (Fig. 7). The average particle size distributions of both nanoparticle samples also support the TEM results, with average size values close to the size distribution ranges of the TEM profiles. Overall, the size distributions from the Zetasizer indicated the absence of aggregation. Moreover, the negative zeta potential values suggest capping and stabilization of the NPs by the biomaterials available in the plant extracts, which prevented strong agglomeration by keeping the particles separate from each other, enhanced the negative–negative repulsion among the particles and consequently conferred higher stability [36].
Fig. 6 (a) Size distribution and (b) zeta potential analysis of phytosynthesized silver nanoparticles from apple pulp extract. Fig. 7 (a) Size distribution and (b) zeta potential analysis of phytosynthesized silver nanoparticles from cumin seed extract.
Without any microwave irradiation, several previous studies have established different protocols for synthesizing AgNPs using food extracts as reducing and stabilizing agents. For instance, a comparatively higher concentrated salt solution (0.1 M/100 mM AgNO3) and red apple fruits were used for Ag nanoparticle synthesis; in that case, 20 mL of the red apple fruit extract was added to 180 mL of aqueous silver nitrate solution and heated at 60 °C for an hour. Laser Dynamic Light Scattering (DLS) analysis estimated the average size of the spherical nanoparticles to be 30.25 ± 5.26 nm; however, the particle size distribution indicated the existence of aggregation [37]. A similar salt concentration (0.1 M/100 mM) was also used in another study, in which AgNPs were synthesized at room temperature by mixing 5 mL of red apple fruit extract with 50 mL of aqueous AgNO3 solution and examined after a 168 h reaction time [38]. The DLS assessment of those silver nanoparticles showed polydispersity with a particle size range of 50-300 nm. Furthermore, the use of red apple as a reducing agent for Ag nanoparticle synthesis has also been reported elsewhere: following a drop-wise addition method, 10 mL of red apple fruit extract was combined with 100 mL of silver nitrate salt solution (20 mM), and the reduction of silver ions to nanoparticles was confirmed between 18 and 24 h of reaction time. TEM imaging revealed spherical nanoparticles with a diameter of 20 nm [39]. In a study on cumin, AgNPs were synthesized from aqueous AgNO3 solution using C. cyminum leaf extract, with the maximum rate of synthesis at 240 min after the reaction started [40]. Compared with these previously used protocols for synthesizing AgNPs from apple fruit and cumin seed extracts, the microwave-assisted green synthesis applied in this study is a more rapid, advantageous and easier approach.
Additionally, being the fastest process, this protocol also produced, to date, the smallest silver nanoparticles from these plant extracts, which are uniformly distributed, i.e., monodisperse in nature, without any aggregation. These results can be attributed to the fact that the rapid and uniform heating during microwave-irradiation synthesis facilitated homogeneous nucleation and a faster capping rate, which significantly influenced the sizes, shapes and dispersity of the NPs [22].
Analysis of antibacterial activities of biosynthesized silver nanoparticles
Antibacterial activity of the AgNPs biosynthesized using Cuminum cyminum seed extract, with inhibition zones of 12.53 ± 0.45 and 10.30 ± 0.36 mm, and of the AgNPs biosynthesized using Malus domestica pulp extract, with inhibition zones of 10.20 ± 0.30 and 9.90 ± 0.50 mm, was found against S. aureus and E. coli, respectively (Fig. 8, Table 1). Notably, stronger antibacterial activity was demonstrated by the silver nanoparticles with the smaller particle size and higher potential value, which were biosynthesized using cumin seed extract. The morphological and physicochemical properties of nanoparticles are vital factors in their antibacterial potential [41]. Nanoparticles with a smaller size distribution have a highly reactive surface-to-volume ratio compared to their bulk macromolecules [42,43]. This distinctive feature of NPs might facilitate their contact and interaction with other particles. Hence, they are capable of interacting with the bacterial cell and tend to show a stronger antimicrobial effect [44,45]. Furthermore, the potential values of NPs also influence their bactericidal properties. Nanometallic particles with a high potential charge can rapidly bind to bacterial cell surfaces, which might increase the bactericidal effect as well [46,47].
In one study, it was observed that the bactericidal activities of powdered silver nanoparticles at varying concentrations against E. coli and S. aureus were almost identical [48]. AgNPs significantly increased cell membrane permeability, which caused protein leakage. They also induced the formation of bactericidal reactive oxygen species (ROS), which permanently deactivated the bacterial respiratory chain lactate dehydrogenase (LDH) [48]. At the same time, the inhibitory effect of the nanoparticles on S. aureus was generally greater than on E. coli, and these results are in line with studies in the literature [49,50]. This study suggests that nano-silver can be a competent antibacterial agent against various pathogenic microbes. The use of biologically synthesized silver nanoparticles in many film applications, with higher antifungal activity compared to chemically synthesized forms, has also been reported. This indicates that nanoparticles obtained by green synthesis will be a promising tool in many areas in the future, especially in the food industry [51].
Cytotoxicity study
The in vitro cytotoxic effects of both silver nanoparticle samples were monitored against the healthy mouse fibroblast cell line (L929) using the XTT cell viability assay. Different concentrations of nanoparticles (0, 0.1, 0.25, 0.5, 1, 2.5 and 5 μg/mL) were applied to test cell viability by observing the activity of mitochondrial enzymes in response to the XTT reagent. The mitochondrial enzymes of viable cells can convert the XTT reagent into a visible orange color, which can be measured by absorbance. The optical intensity is directly proportional to cell viability, and therefore the optical density can indicate the percentage of cell viability [19,52]. Figure 9 shows the in vitro cytotoxic effects of the AgNPs fabricated using Malus domestica pulp and Cuminum cyminum seed extracts. The results indicate that the optical density did not decline drastically with increasing concentration of NPs. Hence, in the present study, the phytosynthesized silver nanoparticles had no cytotoxic effects on the normal mouse fibroblast cell line (L929) at the given concentrations. Nevertheless, it is remarkable that both AgNP samples exhibited antibacterial activity against two important pathogenic bacterial strains (Staphylococcus aureus and Escherichia coli) at a very low concentration (1 μg/mL).
Fig. 9 Cytotoxic effect of phytosynthesized silver nanoparticles on L929 cells: (a) AgNPs from fresh apple (Malus domestica 'Golden Delicious') pulp extract; (b) AgNPs from cumin (Cuminum cyminum) seed extract.
In the past few decades, silver nanoparticles have attracted special interest due to their excellent anti-pathogenic mechanism [53]. Despite inadequate information about the biological behavior and cytotoxicity of nano-silver, it has been used in cosmetics, clinical diagnosis, biomedical evaluation, biotechnology, food processing and some environmental applications [54,55]. However, using animal models, several toxicology studies have shown in vitro and in vivo cytotoxicity of conventionally manufactured nanoparticles [56][57][58]. In a previous study, silver nanoparticles (AgNPs) synthesized using biological material were found to be non-toxic to fibroblasts over a wide concentration range (100-1000 μg/mL) and did not compromise cell viability or growth [59]. It was also stated in another study that the genotoxicity of biologically synthesized NPs depends on the synthesis parameters, the biological source and the test applied [60]. Therefore, producing nanoparticles without using any hazardous chemicals, as well as measuring the cytotoxicity of the produced NPs, have become primary and necessary steps before any kind of nano-based application.
Antioxidant activity of biosynthesized silver nanoparticles
The antioxidant activity of the synthesized AgNPs at increasing concentrations (10, 20, 40, 80, 160 μg/mL) was evaluated by DPPH and ABTS radical scavenging assays. Trolox was used as a positive control over the same concentration range. The scavenging ability of the nanoparticles for both radicals increased in a dose-dependent manner (Fig. 10). At the highest concentration (160 μg/mL), the recorded DPPH scavenging ability of the AgNPs biosynthesized with cumin seed extract was 27.84 ± 0.56%, whereas that of the AgNPs from apple pulp extract was 13.12 ± 0.32%. The inhibition percentage of Trolox was 95.29 ± 0.58% at the same concentration. In previous reports, the DPPH scavenging activities of AgNPs mediated by different plants were higher than in the present study [61,62].
On the other hand, in the case of ABTS scavenging activity, the inhibition percentage at a concentration of 160 μg/mL was 96.43 ± 0.78% for the AgNPs synthesized with cumin seed extract, while it was 78 ± 0.11% for the AgNPs from apple pulp extract. In addition, the inhibition percentage at the same concentration was 99.68 ± 0.06% for Trolox.
Both assays indicated that the silver nanoparticles produced using cumin seed extract, which possess a smaller particle size, demonstrated higher antioxidant activity compared to the AgNPs from apple pulp extract. Moreover, the results revealed that the nanoparticles synthesized with both extracts exhibited higher radical scavenging activity in the ABTS assay than in the DPPH assay. This might be due to the difference in sensitivity of the ABTS and DPPH radicals [63].
Conclusion
This study established that a simple microwave-irradiation scheme with only two optimized parameters (time and temperature) is very effective, convenient and advantageous for the biosynthesis of silver nanoparticles using golden delicious apple (Malus domestica 'Golden Delicious') pulp and cumin (Cuminum cyminum) seed extracts. Moreover, using these two plant extracts, this rapid, facile and efficient green synthesis approach created the smallest and most highly stabilized silver nanoparticles reported so far, which were found to be monodisperse in nature and free of aggregation. Most importantly, the results of the current study show that the silver nanoparticles, with their promising non-cytotoxic behavior toward mammalian cells and strong antibacterial activity, offer a rational basis for further investigation in a wide range of biomedical and pharmaceutical applications.
|
v3-fos-license
|
2021-11-15T16:06:51.323Z
|
2021-07-08T00:00:00.000
|
244107686
|
{
"extfieldsofstudy": [],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "http://crimsonpublishers.com/gmr/pdf/GMR.000627.pdf",
"pdf_hash": "4b7917e990f0ce4e15a227c89db92a8122ca93c2",
"pdf_src": "ScienceParsePlus",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46474",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "4b02b1ca204f078fdcfb131d808fb77d7c72637c",
"year": 2021
}
|
pes2o/s2orc
|
Laparoscopic Modified Belsey Fundoplasty (Gastroesophageal Valvuloplasty) for relief of Gastroesophageal Reflux Disease: Review of Technique and Results
Nissen fundoplication is by far the most common surgical procedure for the repair of gastroesophageal reflux in the United States. However, Nissen fundoplication is associated with significant complications and a high failure rate. The Belsey Mark IV repair creates a valve spanning 270° of the circumference of the gastroesophageal junction by intussuscepting the esophagus into the stomach by 2 cm. However, the conventional Belsey Mark IV repair, as performed through a left thoracotomy, is associated with the morbidity of a thoracotomy. A laparoscopic Belsey repair is a minimally invasive procedure which is associated with excellent relief of reflux. We review the technique and results of laparoscopic Belsey fundoplasty.
Introduction
Gastroesophageal Reflux Disease (GERD) affects approximately 20% of Americans and is the most commonly diagnosed disease of the upper Gastrointestinal (GI) tract in humans [1,2]. Pathologic reflux is associated with esophageal carcinoma [3]. Curiously, despite greater use of acid-suppressive therapy, esophageal adenocarcinoma has increased in frequency by 660% in the United States since the 1970s [4,5]. Therefore, even with the increased use of proton pump inhibitors, the advent of minimally invasive procedures and the promise of preventing metaplastic and dysplastic changes in the lower esophagus with surgery have resulted in greater interest in a physiologic surgical procedure that would restore the normal antireflux barrier.
Nissen fundoplication versus Belsey Mark IV fundoplasty
Nissen fundoplication is the most common surgical procedure advocated for gastroesophageal reflux. Nissen fundoplication increases the lower esophageal sphincter pressure, or high-pressure zone, significantly more than a partial fundoplication. However, Nissen fundoplication is associated with significant dysphagia, gas bloat and unpredictable longevity [6]. There are reports of up to 40% for dysphagia and 31% for the inability to vomit following Nissen fundoplication [6][7][8]. The Belsey Mark IV repair intussuscepts the esophagus into the stomach by 2 cm for 270 degrees of the GE junction, thereby creating a valve. Long-term follow-up of patients undergoing the Belsey Mark IV has shown good-to-excellent results in 78-95% of patients. Even though the Belsey Mark IV repair is as effective as the Nissen fundoplication for the relief of reflux, it is associated with a significantly lower rate of postoperative dysphagia and gas bloat syndrome. After Belsey Mark IV fundoplication, gas bloat has been reported in 3% and dysphagia in 3% of patients [9]. However, the conventional Belsey Mark IV repair has several limitations:
A. It is conceptually a more complex procedure and is more difficult to teach.
B. The location, depth, and spacing of each suture for the repair are crucial to favorable results.
C. The Belsey Mark IV repair is performed through a left thoracotomy with the attendant morbidity.
D. Until recently, a laparoscopic Belsey procedure has not been possible.
Belsey fundoplasty and the gastroesophageal antireflux mechanism
A significant component of the gastroesophageal antireflux mechanism is the highly complex three-dimensional relationship between the gastroesophageal junction and the esophageal hiatus [10][11][12]. The gastroesophageal valve is the 2 cm musculomucosal fold which extends from the greater curve to the lesser curve of the stomach and is created by the oblique intussusception of the esophagus into the stomach. In turn, the gastroesophageal valve is suspended on the esophageal hiatus by the phrenoesophageal ligament. Suspension of the valve onto the esophageal hiatus prevents kinking and incompetence of the valve in the normal setting. With a hiatal hernia and stretching of the phrenoesophageal ligament, the esophagus is pulled out of the stomach much like a telescope being extended, and the musculomucosal fold disappears, resulting in GERD. Through rigorous observation and trial and error, Belsey designed a procedure that recreates what is presently considered to be the normal antireflux barrier. The Belsey Mark IV Fundoplasty (BF) procedure recreates the gastroesophageal antireflux mechanism by intussuscepting the esophagus into the stomach, thereby creating a gastroesophageal valve, and suspending it onto the esophageal hiatus. In a porcine model of GERD, the BF has been shown to most closely recreate the normal antireflux mechanism [13].
Technique of laparoscopic modified belsey fundoplasty (gastroesophageal valvuloplasty)
The patient is placed in the lithotomy position (Figure 1). The surgeon stands between the legs. Two laparoscopic CO2 insufflators are used. Port placement is similar to that for Nissen fundoplication. Port 1 is placed at the umbilicus. A 0-degree Endo Eye video endoscope (Olympus Inc.) is used. Pneumoperitoneum is created using CO2 gas to a maximum pressure of 15 mmHg. The table is placed in a reverse Trendelenburg position. Under direct videoendoscopic guidance, four other ports are placed. The 10-12 Vertiport trocar (Covidien/Medtronic Inc., Norwalk, Conn.) is used for all ports. A design advantage of these ports is that the port sites do not have to be closed. Port 2 is placed in the right paraumbilical region at the mammary line. An Endo-Paddle retractor (Medtronic Inc., Norwalk, Conn.) is introduced through this port and fixed to the table using a self-retaining clamp system (Medifex, Velmed Inc., Wexford, Penn). Port 3 is placed in the paraumbilical region in the left mammary line. A second Endo-Paddle retractor is introduced through Port 3 and used to retract the stomach. Port 4 is placed in the subcostal region halfway between the umbilicus and the xiphoid, just to the left of the midline. This port is aligned with the right limb of the right crus of the diaphragm. Port 5 is placed in the subcostal region two finger-breadths to the left of and caudad to Port 4. Port 5 is aligned with the left limb of the right crus of the diaphragm. The laparoscopic insufflator is disconnected from Port 1 and attached to Port 4. A second insufflator is attached to Port 5. The use of two high-flow insufflators facilitates rapid extracorporeal knot placement while preserving pneumoperitoneum and exposure of the esophageal hiatus. A 30-degree Endo Eye video endoscope is used for the remainder of the procedure. An endo grasper is introduced through Port 4, and endo shears with a cautery attachment are introduced through Port 5. The right crural arch is identified. The phrenoesophageal ligament is divided. The hepatogastric omentum is divided and the caudate lobe of the liver is identified. At this point, the Right Limb (RL) of the right crus is visualized. The lateral and medial borders of the RL are identified. The endoscopic paddle retractor is placed between the RL and the esophagus and used to provide lateral traction to the esophagus. The fatty tissue overlying the RL is excised, and the RL is followed inferiorly to its junction with the Left Limb (LL) of the right crus. Next, the dissection of the RL is carried superiorly onto the crural arch and around to the LL of the right crus. The LL is dissected inferiorly by taking down the angle of His and the gastric fundal attachments. Lateral traction on the paddle retractor moves the esophagus laterally to the left and facilitates the exposure of the entire crural sling. The "V"-shaped junction between the right limb and the left limb of the right crus of the diaphragm is visualized. This facilitates exposure of the aorta, which traverses posterior and deep to this junction through the left diaphragmatic crural sling. Importantly, encirclement of the esophagus or division of the short gastric vessels is not required.
Posterior crural closure
Posterior crural closure is accomplished by re-approximating the RL and LL with two to three sutures. We prefer the Endo Stitch instrument (Medtronic Inc.) with 0 Ethibond suture (Figure 2). When approximating the RL and LL of the right crus posteriorly, the straight needle of the Endo Stitch instrument passes in a tangential plane anterior to the aorta and carries a lower risk of inadvertent aortic injury, which usually is the result of deep suture placement with a curved needle. A 1 cm square absorbable pledget cut from Vicryl mesh (Ethicon, Inc.) is passed through Port 4. The Endo Stitch with 0 Ethibond is passed through Port 5. Intracorporeally, the pledget is loaded onto the needle. The needle is passed through the LL and RL, respectively. Next, intracorporeally, the needle is passed through a second Vicryl pledget, which is introduced with the grasper in the surgeon's left hand. The Endo Stitch carrying the suture is withdrawn out of the entry Port 5, and extracorporeal knots are placed using a long external knot pusher. The suture is cut above the knot. This technique is repeated for all the posterior crural sutures.
Anterior crural closure
In a manner similar to the posterior crural closure, 0 Ethibond sutures on the Endo Stitch instrument with intracorporeally loaded pledgets of Vicryl mesh are used to re-approximate the anterior portion of the crural arch. This step represents a modification of the original Belsey Mark IV technique. In our experience, however, the anterior crural closure allows for the formation of an acute angle at the gastroesophageal junction and recreates one of the important features of the normal antireflux barrier. The sutures are passed through Port 5, a Vicryl pledget is loaded onto the suture intracorporeally, and the suture is passed through the RL and LL of the crural arch. A second Vicryl pledget is loaded intracorporeally onto the suture, and the suture is tied using the extracorporeal technique as outlined previously. Usually, one to two anteriorly placed sutures are required. The crural closure is sized based on the passage of a 60-French bougie into the distal esophagus. Following crural closure, the Belsey fundoplasty is performed.
Belsey fundoplasty
The intussusception of the esophagus into the stomach is accomplished for the anterior 270 degrees (from the RL to the LL of the right crus) of the 360-degree circumference of the esophagogastric junction (Figure 3). The esophagogastric fat pad is removed. When looking at the esophagus directly, the position of the sutures is described using the face of a clock. The esophagus is marked 2 cm above the esophagogastric (GE) junction at the 3 o'clock position lateral to the left vagus nerve (E1), at the 9 o'clock position just in front of the right vagus nerve (E3), and halfway in between at approximately the 11 o'clock position (E2). The stomach is marked 2 cm below the GE junction at the greater curvature (G1), the lesser curvature (G3), and at a point halfway between G1 and G3 (G2). The Endo Stitch instrument with 00 Ethibond is introduced through Port 5. The first Belsey suture (E1 to G1, greater curve) passes in a mattress fashion from G1 to E1 and through the diaphragm at the left crural limb (Figures 4-6). A Vicryl pledget is introduced with a grasper through Port 4. The suture is withdrawn through Port 5. Metal clips are placed on the free ends of the suture in order to facilitate identification and recovery of the suture at a later point. The untied suture is reintroduced through Port 5 and deposited in the left upper quadrant away from the GE junction. This suture is tied at a later time. Placing a tie on the "G1-E1" suture at this time would obscure the precise placement of the "E2-G2" and "E3-G3" sutures. The second Belsey suture (E3 to G3, lesser curve) is passed in a similar manner from G3 to E3 and onto the diaphragm at the right crural limb. Similar to the greater curvature suture, this suture is withdrawn through Port 5, tagged with metal clips and deposited in the right upper quadrant away from the GE junction. This suture will be tied at a later time (Figures 7-10). The third Belsey suture (E2 to G2, midpoint) is introduced in the same manner from G2 to E2 and through the diaphragm at the midpoint of the crural arch. This suture is withdrawn from Port 5 and tied using a knot-pusher and extracorporeal knots. Next, the "E1 to G1" suture is withdrawn out of Port 5 and tied. Finally, the "E3 to G3" suture is withdrawn out of Port 5 and tied (Figures 11-13). Placement of the mattress Belsey sutures results in the intussusception of the esophagus into the stomach by 2 cm for 270 degrees (Figures 14, 15). Only the camera port needs to be closed. This trocar site is closed using a laparoscopic suture passer and 0 Vicryl (Ethicon Endo-Surgery). CO2 is evacuated from the highest trocar by placing the patient in a steep reverse Trendelenburg position. The other Vertiport trocars are removed, and the tissues are allowed to close around the introducer sheath. Subcutaneous tissues are closed with 00 Vicryl, and the skin is closed with staples.
Result
During a 71-month period, 302 patients underwent robotic GE valvuloplasty. Eleven patients (3.6%) were lost to follow-up. Of the remaining 291 patients, 156 were men and 135 were women. The mean age was 51 +/- 14 years. The indication for surgery was failure of medical therapy in 212/291 patients (73%) and upper respiratory symptoms (cough, hoarseness, bronchospasm) in 79/291 patients (27%). On upper GI endoscopy, 183/291 patients (63%) had a Hill Grade IV GE junction, 73/291 (25%) had a Hill Grade III GE junction, and the remaining 35/291 patients (12%) were graded as Hill Grade II. On contrast esophagography, 230/291 patients (79%) were diagnosed with a hiatal hernia. All patients had normal esophageal motility and increased acid exposure. At the time of surgery, exploration of the esophageal hiatus revealed a hiatal defect in 269/302 (87%) patients. The mean operative time was 130 +/- 52 minutes. In 19/291 patients (6%), intraoperative post-fundoplication endoscopy revealed an incompetent gastroesophageal valve. In these patients, the valve underwent further repair in order to obtain a satisfactory Hill Grade I score. Complications were seen in 61 patients (21%). The pleura was entered in 55/291 (19%) patients. This was treated with closure of the pleural opening and intraoperative evacuation of the pleural space. There was no conversion to an open procedure. A pneumothorax was diagnosed postoperatively in 5/291 patients (1.7%). These patients underwent drainage with a 10-French radiographically placed pigtail catheter. One patient had atrial fibrillation (0.3%). There was no mortality. Mean hospitalization was 2.8 +/- 1.7 days (median 2 days). Early (1-12 weeks) postoperative results: Immediately after surgery, 221/291 patients (76%) reported dysphagia to solids. Dysphagia had resolved in all patients by the third postoperative week. There was no incidence of gas bloat in the early postoperative period. By 12 weeks, acid suppression therapy had been discontinued in all patients. Of the 291 patients, 5 (2%) had transient gastroparesis, which resolved by the third postoperative month.
Late follow-up
Mean follow-up was 85 +/- 7 months. At the time of follow-up, the mean score on the SSQ had decreased from 8.3 +/- 0.6 to 0.7 +/- 0.2 (p < 0.05). Of the 291 patients, 279 (96%) scored 0 on the questionnaire and were completely asymptomatic. The remaining patients (4%) had some degree of heartburn and continued acid suppression therapy. Long-term gas bloat was not reported by any patient. Preoperatively, 58/291 patients were objectively graded as Visick III, and 231/291 patients (80%) were Visick IV. At the time of follow-up, 276/291 patients (95%) were graded as Visick I and 5% as Visick II. The hiatal hernia recurred in 8/291 patients (2%). Recurrence was documented on upper GI endoscopy and contrast esophagography. In all instances, the recurrence was in the anterior aspect of the esophageal hiatus. There was no posterior crural disruption.
|
v3-fos-license
|
2015-12-23T19:26:37.933Z
|
0001-01-01T00:00:00.000
|
6412357
|
{
"extfieldsofstudy": [],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://doi.org/10.1364/oe.22.020164",
"pdf_hash": "f495fc07547ba1128030e59f80d13db8539952d2",
"pdf_src": "ScienceParseMerged",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46476",
"s2fieldsofstudy": [
"Physics"
],
"sha1": "f495fc07547ba1128030e59f80d13db8539952d2",
"year": 1996
}
|
pes2o/s2orc
|
Pose Estimation Using Time-resolved Inversion of Diffuse Light
We present a novel approach for evaluation of position and orientation of geometric shapes from scattered time-resolved data. Traditionally, imaging systems treat scattering as unwanted and are designed to mitigate the effects. Instead, we show here that scattering can be exploited by implementing a system based on a femtosecond laser and a streak camera. The result is accurate estimation of object pose, which is a fundamental tool in analysis of complex scenarios and plays an important role in our understanding of physical phenomena. Here, we experimentally show that for a given geometry, a single incident illumination point yields enough information for pose estimation and tracking after multiple scattering events. Our technique can be used for single-shot imaging behind walls or through turbid media.
References and links
"Visualization of a lost painting by Vincent van Gogh using synchrotron radiation based x-ray fluorescence elemental mapping," Anal.
"Image transmission through an opaque material," Nat.
"Correction of atmospheric distortion with an image-sharpening telescope," J.
"Coherence-based imaging through turbid media by use of degenerate four-wave mixing in thin liquid-crystal films and photorefractives," Appl.
"Two-dimensional imaging through diffusing media using 150-fs gated electronic holography techniques," Opt.
"Transillumination imaging performance: a time-of-flight imaging system," Med.
"Imaging with nature: A universal analog compressive imager using a multiply scattering medium," arXiv:1309.0425 (2013).
"Recovering three dimensional shape around a corner using ultra-fast time-of-flight imaging," Nat.
"Estimating wide-angle, spatially varying reflectance using time-resolved inversion of backscattered light," J.
"Rotation-invariant target recognition in ladar range imagery using model matching approach," Opt.
"3D pose estimation of ground rigid target based on ladar range image," Appl.
"Distortion-tolerant 3D recognition of occluded objects using computational integral imaging," Opt.
"Fluorescence lifetime based tomography for turbid media," Opt.
"Reconstruction of absorption and scattering images by multichannel measurement of purely temporal data," Opt.
"Single-molecule localization super-resolution microscopy: deeper and faster," Microsc.
Fig. 1. Light propagation (left) and synthesized streak camera image (right, with space and time axes). Coherent light (A) hits the diffuser in one position, is scattered behind the diffuser, (B) hits the object, is scattered once again from many positions (C), hits the diffuser a second time, and reaches the streak camera (D), which measures one spatial line over time, generating a 2D image. Window: 1 ns × (≈)10 cm.
Many techniques have been developed to mitigate scattering effects. Usually, for strong scattering or volumetric imaging, these methods require multiple measurements, raster scanning, or time scanning. The consequent increase in acquisition time is often impractical for dynamic scenes of interest.
On the other hand, scattering is a natural "information mixer," with each scattered wave containing information about the entire scene [18]. In this respect, scattering can be exploited (rather than mitigated) for object estimation using far fewer measurements than needed in existing techniques. This intuition is especially useful when only recovery of low-dimensional data is required. In such cases, the problem is simplified to estimation of several unknown parameters, such as position and orientation, and has many applications in pose estimation, 3D tracking, and target recognition. However, relatively little work has studied pose estimation through turbid layers.
Here, we demonstrate a computational method for estimating the six degrees of freedom (DOF), the 3D location and the orientation, of rigid objects hidden behind a diffuse layer. The method relies on non-invasive, time-resolved sensing of scattered light. We extend previous streak-camera-based time-resolved imaging methods that use diffuse reflections [19,20] or scattering [21]. Coupling prior information about the object geometry with the inherent mixing that scattering provides, we show that the acquisition ultimately requires only a single illumination point, rather than a raster scan. Our work here extends the technique by recognizing that a full image reconstruction is often unnecessary, i.e., that recovery of fewer unknowns (e.g., location and orientation) in a low-dimensional data system is sufficient for many applications. This allows for (1) a simplified image acquisition without raster scanning and (2) a different numerical optimization algorithm for robust recovery.
This scenario is applicable in several areas where geometry either is captured through other modalities or is known a priori. Therefore, one-shot localization has potential in locating individuals in hazardous environments (such as natural disasters or war zones), detecting objects remotely [22][23][24][25][26], tracking tumors or organs over time with less radiation exposure, or improving inversion in time-resolved diffuse optical tomography or fluorescence tomography [27,28].
Schematically, the method is shown in Fig. 1: laser light illuminates one point (A) on a diffuse layer, which scatters light (B) toward an object (C). Subsequently, light is scattered from the object back toward the diffuser, where it is then imaged (D) onto a time-resolved detector.
The orientation and location of the object determine the path length of the scattered light and hence the intensity time profile of each sensor pixel. Therefore, as the object moves and rotates, the time-resolved scattered light image will change accordingly. The question we answer is the following: given a single illumination point, is it possible to recover the orientation of the object and track its motion through space? By considering the six-dimensional space consisting of 3D translations and 3D rotations, our problem thus changes from object reconstruction to optimization over the space of possible transformations. The rest of the paper is organized as follows. Section 2 describes the forward light transport model, and Section 3 outlines the associated optimization method. The numerical algorithm is detailed in Section 4, followed by a presentation of results in Section 5. Features and extensions of the method are discussed in Section 6. Section 7 concludes the paper.
Forward light model
We model scattering and propagation in the geometric approximation, assuming no occlusions, with the geometric factors shown in Fig. 2 and outlined in Table 1. A pulsed laser (L) is focused onto a thin diffuser (D) at a point d_l; light scatters towards a given object point w and back towards the diffuser at point d_c. A one-dimensional, time-resolved sensor observing the diffuser records an image I_l(d_c, t). For each laser position d_l, we have [21]

I_l(d_c, t) = I_0 Σ_w g(d_l, w, d_c) f_a(w) δ(ct − (r_l + r_d)),

where c is the speed of light, and g(d_l, w, d_c) is a physical factor that depends only on the system geometry and the diffuser properties. f_a(w) is the albedo of the object point w and I_0 is the laser intensity. Angles (α, β, γ, δ, η, and ζ) and distances (r_l and r_d) are defined by the coordinates w, d_l, and d_c, as shown in Fig. 2. N(·) = exp(−(· − µ)²/σ²) is the Gaussian scattering profile of the diffuser, with mean µ and width σ for the scattering angle. The delta function restricts all recorded light propagating along the path d_l → w → d_c to arrive at the same instant. Note here that, although we treat scattering in a single layer, multiple scattering primarily affects the signal-to-noise ratio (SNR), rather than the time resolution, for the parameter space considered here [21].
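To make the forward model concrete, the sketch below synthesizes a streak image by binning the arrival times of single-bounce paths d_l → w → d_c; it is a simplified illustration that collapses the geometric factor g to an inverse-square falloff and omits the angular terms and the diffuser profile N(·). All geometry and values are hypothetical.

import numpy as np

def synthesize_streak(laser_pt, object_pts, camera_pts, albedo, t_bins, c=1.0):
    """Minimal forward-model sketch: each illuminated object point w contributes
    intensity at arrival time t = (|d_l - w| + |w - d_c|) / c. The full model
    also includes the angular factors and the Gaussian diffuser profile N(.)."""
    image = np.zeros((len(camera_pts), len(t_bins) - 1))
    for j, dc in enumerate(camera_pts):
        for w, fa in zip(object_pts, albedo):
            r_l = np.linalg.norm(laser_pt - w)
            r_d = np.linalg.norm(w - dc)
            k = np.searchsorted(t_bins, (r_l + r_d) / c) - 1
            if 0 <= k < image.shape[1]:
                image[j, k] += fa / (r_l**2 * r_d**2)  # simplified g factor
    return image

# Hypothetical geometry: one laser point, a few object points, a 1D line of camera pixels
laser = np.array([0.0, 0.0, 0.0])
obj = [np.array([0.1 * i, 0.0, 0.5]) for i in range(5)]
cam = [np.array([0.02 * j - 0.1, 0.0, 0.0]) for j in range(16)]
streak = synthesize_streak(laser, obj, cam, np.ones(5), np.linspace(0.9, 1.6, 64))
print(streak.shape)  # (16, 63): camera pixels x time bins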
Pose estimation
For a given experimental geometry, we model a rigid object M by a set of |M| uniformly spaced points w ∈ M ⊂ R^3. The recorded image is determined by the location and orientation of M, which can be parametrized by six unknowns, Θ = (t_x, t_y, t_z, θ, φ, ψ), where t_* is the relative translation in the * direction, and θ, φ, ψ are the three Euler angles, measured about the center of mass C = (1/|M|) Σ_{w∈M} w. We define the repositioning function L_Θ(w) = R(w − C) + C + t, where R is the rotation matrix constructed from the Euler angles, and t = (t_x, t_y, t_z) is the translation vector. Similarly, L_Θ(M) = {L_Θ(w) : w ∈ M} denotes the repositioned model. Because not all points w are illuminated, we define the frontal projection π_l(M) ⊆ M as the points in space that are illuminated directly by diffuse light from laser position d_l. Every point p ∈ π_l(M) generates a hyperbolic curve in the streak image, and their sum is the total expected image, I_l(d_c, t). We define the image of a point w generated from laser position l as S_l(w).
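A minimal sketch of the repositioning function L_Θ is given below; it assumes an X-Y-Z Euler convention (the text does not state which convention is used), rotation about the center of mass, and a subsequent translation.

```python
import numpy as np

def euler_to_matrix(theta, phi, psi):
    """Rotation matrix built from three Euler angles (assumed convention: rotate about X, then Y, then Z)."""
    cx, sx = np.cos(theta), np.sin(theta)
    cy, sy = np.cos(phi), np.sin(phi)
    cz, sz = np.cos(psi), np.sin(psi)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def reposition(M, pose):
    """Apply L_Theta: rotate the (N, 3) point set M about its center of mass, then translate."""
    tx, ty, tz, theta, phi, psi = pose
    C = M.mean(axis=0)                                  # center of mass of the model
    R = euler_to_matrix(theta, phi, psi)
    return (M - C) @ R.T + C + np.array([tx, ty, tz])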
Our goal is to recover the pose that best agrees with the measured data. Hence, given a set of streak images I_l and a prior on the geometry M, we search for the unknown pose Θ that minimizes the summed stress between the measured and synthesized streak images (Eq. (5)), where ρ(A, B) is the "stress," a measure of agreement between two quantities A and B. Here, we used the L_1 metric between images, i.e., ρ(A, B) = ||A − B||_1.
Optimization scheme
Equation (5) resembles the Iterative Closest Points (ICP) paradigm [29,30], in which rigid motion is estimated in alternating steps of correspondence search and energy minimization to find the optimal transformation of an estimated point set into a reference one. However, our system is not amenable to the ICP algorithm because of the added complexity due to the time dimension: each object point generates a space-time hyperbola (S_l(w) above) [19]. Thus, overlapping space-time curves and noise pose challenges beyond those encountered in ICP. Consequently, we do not minimize the spatial distance between matching points, but instead compare their nonlinear space-time functions.
To solve Eq. (5), we use a stochastic gradient descent numerical optimization approach [31], in which each parameter is minimized separately. For each of the six parameters, we synthesize, according to our forward model, the expected streak images for two candidate values: one above and one below the current value. The step size is dynamic and is reduced in each subsequent iteration. In each step, 13 images (two options per unknown plus the identity) are synthesized, and the best option that reduces the stress is chosen. The procedure was terminated when the stress reached a threshold, or when the change in parameters became negligible. In principle, π_l should be re-evaluated in each iteration, but in our experiments there was no need, owing to the narrow geometric structure of the objects used. The procedure is summarized in Alg. 1.
Algorithm 1: Localization algorithm
Input: Scene geometry (a model M, laser point locations L, internal parameters of the diffuser, camera, and laser). Output: Θ = (t_x, t_y, t_z, θ, φ, ψ), the position and orientation of the model in space.
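A compact Python sketch of this search is given below; render(pose) is assumed to wrap the forward model for the chosen laser positions, the step initialisation echoes the sT = 30 and sA = 30π/180 values quoted in the algorithm, and the shrink factor and stopping tolerances are illustrative choices rather than the authors' values.

```python
import numpy as np

def localize(measured, render, pose0, step_t=30.0, step_a=np.deg2rad(30.0),
             shrink=0.7, iters=100, stress_tol=1e-3, step_tol=1e-4):
    """Coordinate-search pose estimation.
    measured: list of recorded streak images (one per laser position used);
    render(pose): callable that synthesizes the corresponding images with the forward model."""
    pose = np.asarray(pose0, dtype=float)
    steps = np.array([step_t] * 3 + [step_a] * 3)        # translation / angle step sizes

    def total_stress(p):
        # L1 "stress" summed over the laser positions in use
        return sum(np.abs(m - s).sum() for m, s in zip(measured, render(p)))

    best = total_stress(pose)
    for _ in range(iters):
        # 13 synthesized options per iteration: identity plus +/- one step for each unknown
        candidates = [pose.copy()]
        for k in range(6):
            for sign in (1.0, -1.0):
                c = pose.copy()
                c[k] += sign * steps[k]
                candidates.append(c)
        stresses = [total_stress(c) for c in candidates]
        i = int(np.argmin(stresses))
        pose, best = candidates[i], stresses[i]
        steps *= shrink                                  # dynamic, shrinking step size
        if best < stress_tol or steps.max() < step_tol:  # stress threshold or negligible change
            break
    return pose, best
```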
Pose estimation
We performed 2352 synthetic experiments for different spatial positions and orientations, and we evaluated the success rate and convergence quality. For the object, we chose three characters, "M," "I," and "T," all lying in the same plane (see Fig. 1). For this object we generated 7 different spatial translations and rotated each of the Euler angles 7 times in increments of 20°, for a total of 49 poses. We evaluated each location and orientation using a single laser position, sets of three positions (per row and per column), and all 9 positions together, as can be seen in Table 2. The set of laser positions determines the summation over l in Eq. (5). We repeated each experiment three times with different values of SNR (100, 75, 50, added using MATLAB's awgn function), which corresponds to both spatial and temporal intensity noise. Table 2 shows the mean norm error between the correct data and the calculated one, after removing outliers that converged to local minima far (5°, 1 cm in the X direction (depth), and 2 cm in the YZ plane) from the true shift. Each column in the table represents a different translation, and each row a different orientation. Each cell contains the averaged mean error of 48 instances of the corresponding parameter estimate (the 16 different sets of laser positions times the 3 SNR levels).
Laboratory experiments with streak camera
The experimental setup is photographed in Fig. 4 and sketched in Fig. 5. A mode-locked Titanium:Sapphire laser emits pulses of duration 50 fs at a repetition rate of 80 MHz with an average power of 1 W. The pulse train is focused and directed via two computer-controlled galvo mirrors onto a 10 × 10 cm^2 holographic diffuser (Edmund Optics, 65-886), which has a scattering angle of approximately σ = 8.8°. The objects behind the diffuser are Lambertian. The front side of the diffuser is imaged onto a streak camera (Hamamatsu C5680), which records a horizontal line view of the diffuser with a time resolution of 2 ps. The galvo mirrors relocate the incident laser to different spatial positions, and a streak image is recorded for each one, though for most experiments we use only one laser position. To improve SNR, the exposure time for each streak image is 10 s (so that each image averages 10 s × 80 MHz = 8 × 10^8 pulses). The estimated pose is compared to the object's ground-truth pose, which is measured with a FARO digitizer arm. We measure three points on the base of the shape and calculate the Euler angles and positions accordingly. The object volume was approximately 10 × 5 × 1 cm^3 (height × width × depth) and was located 18 cm from the diffuse layer. The diffuser itself was placed about 0.5 m from the streak camera. The space-time (x-t) pixel resolution of the streak camera is 672 × 512.
In addition to the galvo-controlled signal beam, a beamsplitter was used to create a separate calibration beam, which was focused onto the diffuser in the field of view of the camera. This provides a continual reference measurement to correct for any timing or intensity noise in the laser itself. Because a first-principles calculation of the optimal point-source location is beyond the scope of the manuscript, we measure streak images from multiple incident laser spots. However, we show that using only one of them in the algorithm allows for successful pose estimation. We performed two separate experiments using a human-like figure. The first was dedicated to estimating angular changes and the second to estimating translations. For the angle estimation, we searched for 3 different angles of rotation around the vertical axis, orthogonal to the table. We used 3 different laser positions and evaluated the angle with 7 different sets of measurements (3 individual laser positions, all pairs of laser positions, and all three laser positions together). We compared our results with the object's ground-truth measurement, which has a precision of approximately 3 mm. We rotated the object clockwise by 16 and 32 degrees and tried to recover those angles.
For the translation measurements, we used a similar procedure for 3 different locations of the object. We moved the object by 25 mm, both in-plane and out-of-plane. We searched only for the unknown parameters: one rotation angle for the first experiment, and three translation vectors (one for each location) for the second.
A summary of the estimation of the angular parameter is shown in Table 3. We used the angle measured in one orientation as a reference (R) and measured the angles of the second and third orientations (A and B, respectively) relative to it. The reference orientation provides a calibration measurement to compensate our light transport model for small changes (≈ 1°).
The bottom row of laser positions (1, 4, 7 in Fig. 5) provided the best results: less than a 1° error for the first comparison and less than a 2° error for the second. This is within the error bounds of our ground-truth measurements using the robotic arm. The top positions gave the worst results, but still less than a 3° error. Options 4 through 7 in the table show different variations of laser positions and error values. Because we do not know a priori which laser spot location will provide the best results, we conclude that using all three is the best approach. Figure 6 shows the acquired streak images and steps 1, 5, 10, and the last step (from right to left) of the optimization procedure. In the first row, we see the convergence for position A, and in the second row for position B, both using only laser position 1 (see Fig. 5). We recorded 50 sample points of the object with the robotic arm. Because this is a coarse, noisy version of the model, it yields slight differences between the synthesized images and the measured ones. The translation results are shown in Table 3. Once more, we used one location as a reference R, and measured the other two (C and D) relative to it. The depth (X axis) is calculated with high accuracy, below 1 mm, which is within the error bound of our ground-truth data. Movement parallel to the diffuser is harder to evaluate and differs for each laser position. With all three positions used, we achieved approximately 2 mm accuracy along the Y axis (orthogonal to the table) and between 0.5 cm and 1.5 cm along the Z axis (parallel to the table). In Fig. 7, we present the corresponding convergence images. The first row corresponds to position C, and the second row to position D, both given laser position 1 (see Fig. 5).
Discussion
Generally, our pose estimation method yields good results for a single incident laser position, though more positions provide angular diversity and improve the result. Interestingly, stronger or multiple scattering is a benefit, because the increasingly scattered light provides wider and more uniform illumination compared to that from a diffuse layer. Nevertheless, a first-principles derivation of the minimum number of streak images must be performed to understand the role of noise in the optimization problem. The generalization to thick turbid media is also possible, provided that the time resolution of the detector can overcome any resulting temporal blurring. For biological samples of moderate thickness, the streak camera resolution will suffice [21]. This can be useful for dynamic scenes, such as tracking the flow of blood cells [32], and can be integrated with super-resolution fluorescence microscopy techniques in thick media [33], assuming the temporal profile still holds enough information.
Additionally, because no raster scanning is needed, the acquisition time is fast. Although we average over a pulse train for improved SNR, the method is amenable to a single incident pulse if, for example, the laser source is amplified or the camera sensitivity is increased. Coupled with scaled versions of the method [35,37], this technique is thus readily applicable for long-range imaging after appropriate calibration [36]. Interestingly, recovery is unique up to certain ambiguous symmetries. In Fig. 3 we see degraded accuracy near the symmetry axes (every 90 degrees). However, the symmetry can easily be broken by adding a second illumination source. Moreover, note that for tracking, we utilize prior information on the pose (calculated from the previous step) for convergence to the correct solution (cf. the second column from the left versus the second column from the right in Fig. 3).
Conclusions
In conclusion, the spatial location and angular orientation of objects hidden behind turbid layers can be estimated given a known prior on the shape and a single illumination point, with accuracy below 3 degrees, about 1 mm in depth, and several millimeters in-plane, for the provided setup. Unlike previous work on time-resolved inversion, we consider here a small number of incident illumination sources coupled to a mathematical optimization technique to retrieve low-dimensional structure, such as the location and orientation of objects. In practice, even a single illumination point provides enough information, up to certain symmetries, for successful recovery. The method relies on time-resolved sensing and treats scattering as a benefit, rather than as a hindrance, and it can be integrated with other time-resolved methods of motion detection in cluttered environments. Future work includes improving robustness to handle larger displacements, extending the method for integration in remote sensing applications, and considering volumetric scattering for biological applications.
Fig. 2 .
Fig. 2. Left: Path of a light ray traveling from the laser L, through the diffuser D at point d_l, striking an object at point w, bouncing toward the diffuser at point d_c, and captured by the camera. Right: Spatial orientation of objects for the three Euler angles.
Algorithm 1 (excerpt of steps): 2. Set sT = 30 and sA = 30π/180 (chosen arbitrarily). 3. Calculate π_l(L_{Θ_k}(M)) for every l ∈ L by choosing only the visible points with respect to laser position l.
Fig. 4 .
Fig. 4. The human model behind the diffuser.A photograph of the experiment from the model's point of view.
Fig. 5 .
Fig. 5. Scene setup for a human-like model. We used several laser spots for different positions and orientations of the model in space. Bottom row: raw images captured by the streak camera (positions 1, 2 and 3 from right to left). The top-left bright spot is a physical reference beam used for spatio-temporal alignment. Window: 1 ns × (≈)10 cm. The source is split into two beams with a beamsplitter. The small focused spot in the upper-left corner of each measurement is the calibration beam, used to normalize the global intensity of the data and correct for any timing jitter. The second beam scatters through the diffuser toward the object, and is scanned across the diffuser with a galvo mirror to generate multiple streak images.
Fig. 7 .
Fig. 7. Convergence of the algorithm for laser position 1 (see Fig. 5). First row: translation C. Second row: translation D. From right to left: initial iteration, 5th iteration, 10th iteration, final generated model, and real captured image. A threshold was added to remove noise. Window: 1 ns × (≈)10 cm.
Table 2 .
Depth (X), in-plane (YZ), and angular convergence for the synthesized MIT banner over 2352 experiments. Each column in the table represents a different translation, and each row a different orientation. In each cell, we averaged the mean error of 48 evaluated distances or angles (16 different laser spot locations times 3 different SNR levels), together with the percentage of experiments that converged to a minimum near the correct solution (right member of each cell). Depth (X) can be recovered with high accuracy, and in-plane translations (YZ) can be recovered in most circumstances. Angular changes that produce a strong spatial shift (ψ for the MIT banner) are found with high accuracy, while other rotations are harder to evaluate.
Table 3 .
Translation (in mm) and angular (in degrees) results of the real experiments. Distances and angles are measured with reference to one spatial setup (R). While not mandatory, this provides an additional alignment. In the top table, we see that one laser position is sufficient to evaluate the angle to within 1-2°, and from the tables below we learn that depth (X) can be evaluated with high accuracy, while in-plane movements (YZ) are harder to recover.
|
v3-fos-license
|
2019-04-14T13:05:12.115Z
|
2016-01-29T00:00:00.000
|
113110929
|
{
"extfieldsofstudy": [
"Engineering"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s40544-016-0101-2.pdf",
"pdf_hash": "e52b5980b4c88403394034302fd6711e862bdc74",
"pdf_src": "Adhoc",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46477",
"s2fieldsofstudy": [
"Materials Science"
],
"sha1": "ab770f784c463faada6dbf79ddbf042a8ea4ec17",
"year": 2016
}
|
pes2o/s2orc
|
On the risks associated with wear quantification using profilometers equipped with skid tracers
In this study a wear track was generated on aluminium by rubbing it against a hard steel ball. The generated wear track has a typical depth of 50 μm and exhibits marked ridges on its borders. The cross-section profiles were measured using two different stylus profilometers equipped either with skidless or skid probes and compared to a skidless reference instrument. It was found that the use of a skid probe can introduce significant distortion of the measured wear track profiles and thus errors in wear quantification. The reason for this is attributed to the presence of the ridges, which, by elevating the skid, artificially alter the reference height used for profile measurement.
Introduction
Measurement of the wear track volume using profilometers is a widely used technique for quantifying wear. This method is of high sensitivity and rather simple to use. Several types of instruments, such as white light interferometers, scanning laser or triangulation optical sensors and stylus profilometers, are commonly used in tribology practice. Stylus profilometers offer a number of advantages compared to non-contact optical instruments. In these instruments, a small stylus scanned across the sample senses the surface. The surface profile is determined by continuously recording the vertical movement of the stylus with respect to a reference height. Stylus profilometers are immune to artefacts derived from local variations in surface optical properties due to deep valleys, large slopes or multiphase materials that may affect optical sensors [1]. Further, stylus profilometers are commercially available as compact, cost-effective instruments. Such instruments are particularly suitable for the determination of the wear track volumes generated during laboratory tribological tests. For this, cross-section profiles are typically measured perpendicularly to the sliding direction. The cross-section area can be determined by integrating the void area below the original profile height (the surface level before rubbing) over the width of the wear track [2]. The wear volume can be calculated by multiplying the cross-section area by the length of the wear track [2]. Among the small stylus instruments, the so-called skid tracer has recently been proposed as a particularly cost-effective wear measurement instrument. While a classical (skidless) tracer measures the height of the stylus tip with respect to an internal instrument reference, a skidded sensor uses as reference a skid (of much larger dimensions than the stylus) that contacts the surface and moves aligned with the stylus (Fig. 1). While skidless probes sense both waviness (long-range profile features) and roughness (short-range profile features), measuring with the skid as reference levels out the waviness of the sample [3]. In the case of wear track measurement, the suppression of the sample waviness could constitute an advantage provided that the unworn surface is flat and smooth. This is in theory the case, as laboratory samples are usually finely polished prior to wear tests. However, wear often leads to the formation of ridges on the borders of the sliding track. Since the skid senses the ridges, the reference height, corresponding in principle to the surface level prior to wear, becomes distorted. This may potentially introduce errors in wear quantification. On the contrary, skidless profilometers measure the entire profile height, including the ridges.
Thus, this study was initiated with the aim of verifying to what extent skid tracer profilometers may introduce artefacts into wear quantification. For this, an ad-hoc generated wear track was characterised using two commercial profilometers equipped with either a skidless tracer or a skid tracer. For comparison, reference measurements were taken with another commercial skidless stylus instrument. The obtained wear track profiles and wear data are compared and discrepancies are discussed.
Materials and methods
Tribological test: a wear scar was produced on a flat aluminium sample by rubbing it against a steel ball in reciprocating sliding. The tribometer used was a Tribotechnic Tribotester Model 200 N. The contact configuration involved a static aluminium plate against which a bearing steel ball (DIN 100Cr6, diameter 12.7 mm, roughness AFBMA G10) was sliding in reciprocating alternate motion (sinusoidal motion with frequency 10 Hz, amplitude 4 mm). The applied normal load was 150 N and the test duration was 900 s, corresponding to a sliding distance of 72 m. The contact was lubricated with a grade 5W-30 oil and maintained at a temperature of 130 °C.
Height profiles were measured on the wear track perpendicularly to the sliding direction at distances of ½ of the scar length starting from one end of the scar (Fig. 2). The positioning of the stylus in the centre of the wear scar was done manually and was thus affected by some uncertainty, estimated to be less than 0.2 mm. In order to check the influence of this uncertainty on the final outcome, the measurement was repeated using the same instrument (Profilometer 1), repositioning the stylus each time. For comparison, the same measurements were repeated without repositioning the stylus on the sample.
The instruments used and the corresponding parameters are listed in Table 1. The skid profilometer was run at two distinct profile lengths to evaluate the effect of distance on waviness suppression by the skid. The observed distortions are due to the relative difference in height between the stylus and the preceding skid, which follow the same profile but at shifted positions. For example, the initial descent (from left to right) of profile (d) can be attributed to the climbing of the skid onto the left ridge, generating an apparent descent of the surface. The changes of reference height (the skid) in the course of a measurement clearly yield a distorted profile that does not represent the real surface profile and thus can hardly be used for wear quantification, as shown in the next section.
Quantitative aspects
For the appraisal of wear it is necessary to quantify the extent of the wear track as well as of the ridges. The difference between the two yields the amount of removed material, i.e., the amount of wear. The quantification was carried out on the measured profiles by first levelling the profile to compensate for possible misalignment between the sample surface and the translation direction of the probe. Afterwards, the points on the horizontal axis delimiting the ridges and the track were manually selected and the corresponding areas were calculated by integrating the profile height over the length interval delimited by the selected points. Figure 4 shows representative examples of this quantification for profilometers 2 and 3 (same brand), with and without a skid. Discrepancies exist between the dimensions of the wear track, both in width and in depth. The left and right ridges are symmetric only in the case of the skidless probe (Fig. 4(a)). The skid probe (Fig. 4(b)) yields a distorted and enlarged left ridge compared to the right one.
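The levelling-and-integration step can be expressed as a short sketch (NumPy, illustrative function and variable names; trapezoidal integration is an assumed choice, and the manual selection of the interval limits is taken as already done):

```python
import numpy as np

def level_profile(x, z, ref_mask):
    """Remove sample tilt by subtracting a straight line fitted to unworn reference points."""
    a, b = np.polyfit(x[ref_mask], z[ref_mask], 1)
    return z - (a * x + b)

def cross_section_areas(x, z, track, ridges):
    """Integrate the levelled profile z(x) over manually chosen intervals.
    track: (x_left, x_right) of the wear groove; ridges: list of (x_left, x_right)."""
    def area(lo, hi):
        m = (x >= lo) & (x <= hi)
        return np.trapz(z[m], x[m])
    groove = -area(*track)                                     # material removed (profile below zero)
    ridge = sum(max(area(lo, hi), 0.0) for lo, hi in ridges)   # material piled up in the ridges
    return groove, ridge, groove - ridge                       # net wear = removed minus displaced
```

Multiplying the net cross-section area by the track length then gives the wear volume, as described in the Introduction.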
The wear scar cross-section areas (displaced material, red surface in Fig. 4) measured using the different instruments are compared in Fig. 5. This area is proportional to wear. Figure 5 shows that the quantification of the cross-section area using the same stylus profilometer is a robust method yielding reproducible values. The exact positioning of the sample under the profilometer does not seem to be the most crucial factor affecting the scatter of the results: indeed, profiles (c) and (d) in Fig. 5 were measured at the same location but nevertheless exhibit differences in cross-section area similar to those of profiles (a) to (c), which were measured with the sample repositioned each time. Different skidless profilometers yield slight variations in cross-section area, probably because of differences in tip geometry, sensitivity of the measurement electronics and calibration procedures. Not surprisingly, considering the previously described distortions of the profile introduced by the skid, much larger discrepancies are introduced by the use of a skid: in the case of the brand C instrument, the skid profilometer underestimates the cross-section area (and thus wear) by 25%, while the brand B skid instrument overestimates wear by more than 25%.
Conclusions
This study shows that the use of a skid probe for measuring cross-section profiles of worn surfaces characterised by ridges on either side of the wear track can introduce significant distortions of the measured wear track profiles and thus errors in wear quantification. The reason for this is attributed to the presence of ridges that shift up the skid position and thus artificially alter the height reference of the skid probe.
Open Access: The articles published in this journal are distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Fig. 4 Quantification of wear track (red) and ridges (green) cross section areas for profiles measured using profilometer 3 without a skid probe (a) and profilometer 2 with a skid probe (b).
|
v3-fos-license
|
2021-10-28T15:19:33.887Z
|
2021-07-08T00:00:00.000
|
240018339
|
{
"extfieldsofstudy": [],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://ecology.dp.ua/index.php/ECO/article/download/1103/1059",
"pdf_hash": "574fd191f2d7788bf51d6ec539cbb1a1e0205bae",
"pdf_src": "Anansi",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46479",
"s2fieldsofstudy": [
"Environmental Science"
],
"sha1": "cf49f9e0063aca5192e7137a31f7a51c19ef9d97",
"year": 2021
}
|
pes2o/s2orc
|
Changes in the structure and dominance of the zooplankton community of the Kremenchuk Reservoir under the effect of climate changes
National University of Life and Environmental Sciences of Ukraine, General Rodimtsev st. 19, Kyiv, 03041, Ukraine. Tel.:+38-067-274-09-17. E-mail: rudyk-leuska@ukr.net Kruzhylina, S. V., Buzevych, I. Y., Rudyk-Leuska, N. Y., Khyzhniak, M. I., & Didenko, A. V. (2021). Changes in the structure and dominance of the zooplankton community of the Kremenchuk Reservoir under the effect of climate changes. Biosystems Diversity, 29(3), 217–224. doi:10.15421/012127
Introduction
The Dnipro reservoirs are unique man-made objects, the ecological state of which develops under the impact of a complex of factors of external and internal origin. Due to the significant ecological and economic importance of reservoirs, the response of their ecosystems to this impact should be the subject of monitoring studies, which would include both the assessment of integrated biocoenotic parameters and the determination of individual groups of aquatic organisms, which, in particular, can define the direction and intensity of the successional status of a water body. In this respect, zooplankton is of great interest. This group of organisms, which in reservoirs mainly occupies the niches of first- or second-order consumers, is a critical component of the food supply for fish juveniles and one of the main links in water self-purification (Yermolaeva, 2008). Accordingly, without a holistic picture of the state of zooplankton and the factors that affect it, it is impossible to understand the mechanisms of the impact of external factors on the biodiversity, ecological sustainability and bioresource potential of aquatic ecosystems.
Studies of climate change have become especially important in recent years, as climate change is a powerful factor affecting the conditions of both individual species and biocenoses as a whole. This fully applies to aquatic organisms, whose habitats are more stable (compared to terrestrial ecosystems), but the variability of global climate indices nevertheless affects the production and degradation processes in aquatic ecosystems. In particular, an increase in air temperature causes increased atmospheric circulation and an intensified rise of nutrients into the upper layers of water. In turn, this results in enhanced photosynthesis as a basis for the formation of a higher trophic status of the aquatic ecosystem and an increase in fish productivity (Sokolov, 2010). The main responses of freshwater zooplankton to climatic influences are considered to be changes in the abundance, distribution and structure of zooplankton communities (Vadadi-Fulop et al., 2012). Changes in zooplankton abundances in response to rising average temperatures were shown for some reservoirs and lakes (Jeziorski et al., 2016; Korneva et al., 2019); however, in other lentic systems, the dynamics of zooplankton abundances in the context of climate change did not show clear trends (Carter & Schindler, 2012; Fomina & Syarki, 2018). In this regard, there is a question about the impact of these changes on the structural and functional parameters of the main groups of aquatic organisms as a component of the study of patterns of transformation of aquatic ecosystems under conditions of regulated river flow and the multi-vector impact of external factors (Romanenko et al., 2019).
In addition, human impact remains a significant factor that affects aquatic bioresources. For example, intensive anthropogenic pressure reduces the intensity of development of zooplankton groups and results in a change from oligo- and polydominant groups to monodominant ones and in instability of average annual values, with a general tendency to decrease (Zimbalevskaya, 1989; Pashkova, 2003; Kruzhylina & Didenko, 2007; Pashkova, 2010).
In general, the ecosystems of reservoirs are under the continued effect of a complex of external factors, some components of which are characterized by instability and a multi-vector nature (Zimbalevskaya et al., 1987; Shcherbak et al., 1991; Shcherbak & Yemel'yanov, 2002). Accordingly, data showing the state of aquatic organisms are sufficient for developing an optimal scheme of rational water use only with a level of objectivity that can be obtained within the framework of a continued monitoring system. Thus, under conditions of continued changes in both the hydrological regime of the reservoirs and human impact on their ecosystems, there is a need to assess the dynamics of macroindicators of communities, which form a significant segment of the flows of matter and energy in aquatic ecosystems.
Material and method
Material for the study was collected from a boat throughout the entire area of the Kremenchuk Reservoir in August of 2006, 2010-2013 and 2020 using a permanent network of sampling stations (30 points). Zooplankton was collected with a conical Juday net (opening diameter 25 cm, mesh size 125 µm) by vertically hauling it from the bottom to the water surface. Collected zooplankton samples were placed in 100 ml glass bottles and preserved in a 4% formaldehyde solution for further laboratory processing. Water temperature at the sampling sites was measured using an electronic thermometer.
In the laboratory, invertebrates from zooplankton samples were identified to the lowest possible level and counted under a microscope (x40-100) in a counting chamber. Zooplankton abundance data were expressed in density (ind./m 3 ). Individual weights of organisms were estimated using published length-weight regression relationships (Mordukhai-Boltovskoi, 1954;McCauley, 1984;Watkins et al., 2011). Zooplankton biomasses were expressed as g/m 3 .
Saprobity values for individual species were taken from tables of saprobity indices (Gubacek, 1977). Saprobity indices were calculated using the Pantle and Buck method according to Sládeček (1985).
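The Pantle-Buck index referred to here is commonly written as S = Σ(s_i·h_i)/Σ(h_i), where s_i is the saprobity value of indicator species i and h_i its abundance weight; a minimal sketch (illustrative function name, not the authors' implementation):

```python
def pantle_buck(species):
    """Pantle-Buck saprobity index: S = sum(s_i * h_i) / sum(h_i),
    computed over the indicator taxa present in a sample.
    species: iterable of (s_i, h_i) pairs (saprobity value, abundance weight)."""
    num = sum(s * h for s, h in species)
    den = sum(h for _, h in species)
    return num / den if den else float("nan")

# Example: three indicator taxa with saprobity values 1.3, 1.8, 2.3 and weights 5, 3, 1
# pantle_buck([(1.3, 5), (1.8, 3), (2.3, 1)]) -> about 1.58, i.e. beta-mesosaprobic water
```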
Zooplankton abundances, biomasses and saprobity were analyzed for the total area and separately for three parts of the Kremenchuk Reservoir: upper, middle, and lower, where the upper part is a lotic shallow reach, which extends from Kaniv to the railway bridge near Cherkasy; the middle part is a limnetic reach with an average depth of about 2 m, which extends from the railway bridge to the Adamivka-Zhovnyno line; and the lower part is a limnetic reach with an average depth of about 10 m, which extends from the Adamivka-Zhovnyno line to the dam of the Kremenchuk Hydroelectric Power Plant.
The classification by Rogozin et al. (2015) was used to separate zooplankters in relation to temperature: cryophiles (indicator weight 0.75 < t ≤ 1.50), thermophiles (1.50 < t ≤ 2.25) and thermobionts (t > 2.25). Linear regressions were used to determine the relationship between the abundances of the most abundant groups of zooplankters and water temperature. One-way ANOVA with the post-hoc Tukey-Kramer test was used to compare mean abundances of copepods, cladocerans, and rotifers in different years. The normality of data distribution was assessed using the Kolmogorov-Smirnov test. Abundances of zooplanktonic organisms were log-transformed to meet the assumptions of normality. Continuous variables are presented as means and standard errors (x ± SE). Calculations and statistical processing of the data were conducted in JMP IN 10 (SAS Institute).
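The regression and ANOVA steps were run in JMP; an equivalent sketch of the log-transform, the regression on water temperature and the year-wise comparison in Python (SciPy, illustrative function names) would be:

```python
import numpy as np
from scipy import stats

def abundance_vs_temperature(abundance, temperature):
    """Ordinary least-squares regression of log-transformed abundance (ind./m^3) on water temperature."""
    y = np.log10(np.asarray(abundance, dtype=float))
    res = stats.linregress(temperature, y)
    return {"slope": res.slope, "r": res.rvalue, "p": res.pvalue}

def compare_years(groups):
    """One-way ANOVA on log-transformed abundances grouped by sampling year."""
    logged = [np.log10(np.asarray(g, dtype=float)) for g in groups]
    f_stat, p_value = stats.f_oneway(*logged)
    return f_stat, p_value
```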
Results
During the studied period, significant year-to-year fluctuations of zooplankton abundances and biomasses in the Kremenchuk Reservoir were observed, with total abundances varying from 23·10³ to 256·10³ ind./m³ and biomasses from 0.14 to 2.11 g/m³ (Figs. 1 and 2). In the analyzed samples, the most important zooplankters by abundance and biomass throughout almost all years were cladocerans, except in 2020, when a change of dominant groups was observed and rotifers became most abundant. The least pronounced year-to-year fluctuations were recorded for copepods (CV was 38.9% for abundance and 64.4% for biomass). No significant trends were detected between the abundances and biomasses of copepods and rotifers and water temperature during the study period (Figs. 1 and 2).
A significant relationship was observed between the abundance of cladocerans (log-transformed data) and water temperature (linear model: P = 0.035, Fig. 3). No significant relationship was found for zooplankton biomass.
Fig. 1. Year-to-year dynamics of the abundance of zooplankton organisms (x ± SE) and water temperature in the Kremenchuk Reservoir by years.
Zooplankton species diversity varied in different years of the study. A total of 46 taxa were recorded in the reservoir during the study period, of which 33 were identified to the species level and 6 to the genus level. The number of taxa by years ranged from 26 to 32, and the number of species from 17 to 27 (Table 1).
Three groups of zooplankters were detected based on their indicator weights in relation to water temperature: cryophiles included 5 species, thermophiles included only 2 species, and thermobionts included 10 species. The share of cryophiles in the total number (ind./m 3 ) of zooplankters fluctuated within different ranges in some parts of the reservoir. (2006) in the lower part. No statistically significant relationship was found between these parameters. To a large extent, this may be due to a strong effect of the hydrological regime of the Kremenchuk Reservoir, which, especially in the upper part, is characterized by significant inter-seasonal and interannual instability. At the same time, a certain tendency to a gradual increase in the abundance of thermobionts by years in the middle part of the reservoir was observed.
In general, significant fluctuations in the number of indicator species of pollution (3-16 species) were observed in different years and different parts of the reservoir, as well as in their abundances (0.9·10³-130.4·10³ ind./m³) and saprobity (1.5-1.9), but there was a tendency for an increase in the studied indicators in both the inter-annual and the spatial (from the upper to the lower parts of the reservoir) aspects (Fig. 6).
The largest number of indicator species was recorded for β-mesosaprobes (1-6) and oligo-β-saprobes (1-5), which allows us to assign the water of the Kremenchuk Reservoir to the oligo-β-mesosaprobic zone. The least polluted water was observed in 2006-2012 in the upper part of the reservoir, and its quality gradually and significantly deteriorated from the upper to the lower part. Water in the middle part of the reservoir was most polluted in 2010, 2011 and 2020, when a very low number (2 species) of α-mesosaprobes was recorded (Fig. 7). The abundance of each of the indicator species fluctuated very significantly over the years in different parts of the reservoir. The most abundant were zooplankters belonging to o-, o-β-, and β-saprobes. Abundances in 2020 in the middle part of the reservoir and those of 2012 in the lower part of the reservoir differed most significantly from the others. In 2020, the water was quite clean and the most abundant organisms in the zooplankton samples were o- and β-saprobic, accounting for 42% and 46%, respectively. In 2012, in the lower reaches of the reservoir, 96% of the indicator species belonged to o-β-saprobes. On average, the water of the Kremenchuk Reservoir can be classified as β-saprobic, which to some extent shapes the species composition of the zooplankton of the reservoir and its dominant complexes (Fig. 8).
Discussion
Significant inter-annual fluctuations in the abundances of zooplankton of the Kremenchuk Reservoir were observed, but without clear trends. A similar phenomenon was recorded in previous years, in particular during the period of 2001-2004, when zooplankton biomass in this reservoir ranged 0.06 to 1.68 g/m 3 (Kruzhylina, 2005).
Among the investigated individual groups of zooplankters, some significant trends were observed only for cladocerans. The absence of trends in the abundances of copepods and rotifers might be due to the fact that copepods are considered to prefer cold water (Rogozin et al., 2015; Verbitsky et al., 2017) and the water temperature during the study was unfavourable for their development, while rotifers are considered to be eurythermal (Rogozin et al., 2015) and can easily tolerate significant temperature differences. In addition, copepods usually have a lower abundance in the environment in summer compared to spring or autumn due to a combination of predation and the diapause that they enter to avoid mid-summer predation (George, 1973). No significant relationship between warming and rotifer biomass was observed in other studies either, e.g. in Lake Võrtsjärv in Estonia (Agasild et al., 2007; Cremona et al., 2020). Inter-annual variations in rotifer biomasses are considered to be affected by interspecific and trophic relationships rather than by temperature (Agasild et al., 2007).
The reasons for changes in the dominant groups of zooplankton in different years may be due to a number of factors of both anthropogenic and natural origin. The most important of these may be factors such as the temperature of the aquatic environment and pollution by discharges from industry and agricultural lands (Frolova et al., 2013;Fetter & Yermolaeva, 2018).
The taxonomic composition of zooplankton was typical for the Kremenchuk Reservoir, having been established there after the end of the period of transformation of lotic environments to lentic habitats. The dominant zooplankton complex was composed of such species as Ch. sphaericus, Bosmina coregoni, B. diversicornis, E. dilatata, which was consistent with the results of previous studies in both Kremenchuk and other Dnipro reservoirs for the beginning of 2000s (Kruzhylina, 2005). Dreissena sp. veligers in zooplankton samples were recorded only in 2020. Given the rather high biomasses of Dreissena sp. in the Kremenchuk Reservoir, which accounted for more than 50% of the total biomass of mollusks (Kruzhylina, 2015), this may be due to the seasonal nature of the dynamics of the abundance of this species, which can be specific for different water bodies. We should note a significant (9.4-30.8 times) increase in the number of rotifers in the samples in 2020, which is consistent with the data of other authors, which showed a significant role of Dreissena sp. veligers in the diet of such rotifers as Asplanchna sp. (Lazareva et al., 2015). Accordingly, the change in the dominant groups from cladocerans to rotifers in 2020 against the background of the abovementioned increase in their abundance and, especially, biomass (19.0-93.3 times) may be associated with a particular year and have no stable trend.
In general, the dynamics of zooplankton taxa composition in the context of processes of ecological succession indicate a transition to a long phase of functioning, in which the structure of zooplankton communities remains relatively stable, although with some changes in absolute and relative quantitative indicators of the development of individual taxa. Production indicators such as abundance and biomass of zooplankton remain at a high enough level for reservoirs of this type (Pashkova, 2014;Trokhymets, 2014).
Thus, the analysis of the state and dynamics of the zooplankton communities of the Kremenchuk Reservoir during the study period does not allow us to consider water temperature as a factor significantly affecting species composition or abundances, except for some individual species (e.g., Ch. sphaericus). Ch. sphaericus is a typical small-sized cladoceran, which is common in eutrophic water bodies with abundant detritus and high cyanobacterial concentrations, and some studies showed an increase in its abundance with increasing chlorophyll concentration in some years; however, these relationships were not seen in other years (Vijverberg & Boersma, 1997). The authors suggested that the observed trends were caused by a food effect, and partially by predation pressure. Zooplankton is known to respond to changes in the trophic state of a water body, while climate warming causes general eutrophication, which in turn can affect zooplankters (Jeppesen et al., 2002; Carvalho et al., 2012). Therefore, the relationship between the abundance of cladocerans, and Ch. sphaericus in particular, and water temperature, which was observed during the study period, was probably due to temperature-related changes in the trophic state of the Kremenchuk Reservoir in different years.
Taking into account that the Kremenchuk Reservoir receives polluted and conditionally pure water from industrial and domestic effluents, a significant part of which contains organic matter (Vyshnevsky, 2011), this segment of external influence needs a separate assessment in terms of changes in zooplankton communities. For this purpose, the level of pollution of the reservoir was assessed using zooplankton indicators of water saprobity. According to ecological criteria, the entire studied water area of the reservoir was characterized by a generally unbalanced (or weakened) potential of zooplankton communities for water self-purification. The largest number of indicator species was recorded among zooplankters belonging to β- (1-6) and o-β-saprobes (1-5). Parameters of water pollution in different years and in different parts of the reservoir differed significantly, which might depend on the presence or absence (in a certain period of time) of polluting discharges. The total saprobity index in different years ranged from 1.5 to 1.9, and its changes had a multi-vector nature depending on the hydrological and temperature regimes, which may be associated with the intensity of anthropogenic load in some parts of the reservoir (Bukovsky & Kolomeytseva, 2013).
Significant fluctuations in water pollution in different years and in different parts of the reservoir can to some extent be explained by the presence or absence (in a certain period of time) of polluting effluents. Water pollution to some extent affects the species composition of zooplankton in the reservoir. As water pollution increases, such o-saprobic species as B. coregoni, Synchaeta pectinata, E. dilatata, Gastropus hyptopus, Polyarthra sp., Trichocerca stylata can disappear and be replaced by β-and o-β-saprobic species. The largest number of indicator species recorded in the reservoir during the study period belonged to β-and o-β-saprobes. The β-saprobes included Daphnia longispina, Asplanchna priodonta, while the o-β-saprobes included B. longirostris, Ch. sphaericus, Daphnia cucullata, Pleuroxus uncinatus, P. striatus. The α-saprobic species that were found in small numbers in the reservoir included such species as Daphnia pulex and Moina sp.
Despite the fact that there are certain tendencies for an increase or decrease of certain species of zooplankton depending on changes in environmental parameters in the reservoir, no other mathematically significant relationships were detected. This is understandable, because in addition to temperature, the abundance of zooplankters in the reservoir may be significantly affected by a number of other factors, such as the hydrological regime, weather conditions (wind, sunlight), the chemical composition of the water, various pollutants (industrial wastewater, agricultural lands) and pH (de Eyto & Irvine, 2001; Frolova et al., 2013; Yermolaeva et al., 2016; Fetter & Yermolaeva, 2018). In general, global warming is unlikely to supplant the effects of changing nutrient loading and fish predation, which are considered to be the major drivers of zooplankter dynamics (McKee et al., 2002).
Moreover, significant factors affecting the level of zooplankton development are the presence of zooplankton-eating fish in water bodies (Kruzhylina, 2009; Golubkov, 2013; Golubkov et al., 2018), as well as the qualitative and quantitative composition of phytoplankton. For instance, the biomass of non-predatory cladocerans significantly depends on the combination of the biomass of cyanobacteria with cell volumes of 50-100 µm³ and chlorococcal algae with cell volumes of 100-150 µm³. The insignificant development of mainly small cyanobacteria and chlorococcal algae causes the low quantitative and qualitative development of zooplankton in the Kremenchuk Reservoir (Kruzhylina & Didenko, 2007). Therefore, it can be difficult to trace a clear, reliable relationship between zooplankton and a single environmental factor such as temperature under natural conditions. Thus, a quantitative assessment of the impact of climate change on the zooplankton of the Kremenchuk as well as other Dnipro reservoirs is currently impossible, making it necessary to conduct further studies to differentiate the impact of particular external factors on the structural and functional characteristics of zooplankton and to assess the consequences of these changes for other aquatic organisms, including fish.
Conclusion
According to the results of the studies of 2006-2020, 26 to 32 taxa were observed in the zooplankton of the Kremenchuk Reservoir, where the dominant groups were cladocerans and rotifers. Zooplankton abundance in the reservoir during the study period ranged from 23·10³ to 256·10³ ind./m³, and biomass from 0.14 to 0.89 g/m³. A significant positive relationship was observed between the abundance of cladocerans, including some individual species such as Ch. sphaericus, and water temperature.
It was not possible to trace clear, reliable patterns in the relationships between water temperature and zooplankton abundance. This can be explained by the fact that abundance is also significantly affected by a number of other factors, such as the hydrological regime, weather conditions (wind, solar radiation) and the pH of the aquatic environment, which are constantly changing. In addition, significant factors affecting the level of zooplankton development are the presence of zooplankton-eating fish in the water body and the qualitative and quantitative composition of phytoplankton. It is probably too early to assess the impact of climate change on zooplankton, as these changes are still unstable and short in time, but it is necessary to constantly monitor the biota of aquatic ecosystems to further study and summarize the data, which could later allow identifying such changes.
|
v3-fos-license
|
2022-12-29T16:09:51.338Z
|
2022-12-27T00:00:00.000
|
255220542
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.mdpi.com/1660-4601/20/1/412/pdf?version=1672126569",
"pdf_hash": "905df5e487e26c7df837964823d354e911211079",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46480",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "4027601db39487882df1086ff3a0dc19d3bad379",
"year": 2022
}
|
pes2o/s2orc
|
Patients’ Opinions on the Quality of Services in Hospital Wards in Poland
Introduction: Patient opinion surveys have become a widely used method for assessing key aspects of the functioning of medical facilities and, thus, of the functioning of the entire health care system. They are a prerequisite for developing patient-centered care and an essential component of quality improvement programs. In many countries, including Poland, patient opinion surveys are written into the accreditation standards of medical institutions. Patient’s readiness to recommend a hospital is a recognized indicator of the quality of patient-centered care. In a report on strategies for improving the quality of health care in Europe published in 2019 by WHO and the OECD (Organisation for Economic Co-operation and Development), patient’s readiness to recommend a hospital was cited as one of the basic indicators of ‘patient centeredness’ along with patient satisfaction. Therefore, as well as consideration of the quality of medical care, a patient recommendation index was also used in the study presented in this paper. The index was based on the answers to questions about the patient’s readiness to recommend a hospital ward to family and friends. Aim: The aim of the study was to investigate patients’ opinions on the quality of services in particular hospital wards. A patient opinion survey can be used to improve the quality of services and monitor the effects of health-related activities, identify areas that need improvement, motivate medical staff and prevent their burnout, build a trusting relationship with patients, and compare the quality of health care in various facilities. Material and methods: The study was carried out in March 2022. The patient opinion survey was conducted using the CAWI (Computer-Assisted Web Interview). The sample selection was purposive. The respondents were patients with a history of hospitalization. The sample selection used an algorithm for the random selection of patients who met the criteria for the sample. The inclusion criterion was hospitalization in the 12 months prior to the study. A standardized questionnaire was used that was aimed at the assessment of the quality of medical care and the patient’s rights to information. Additionally, the survey contained questions about the demographic characteristics of the respondents. Results: A total of 38% of patients with a history of hospitalization expressed criticisms. The majority of statistically significant differences were observed when differentiating respondents according to age. Elderly persons significantly more often declared having been treated with respect and interest. They also rated more highly the meals served in the hospital, effective pain treatment, and respect for the patient’s dignity and intimacy during diagnosis and treatment. Younger persons assessed all these aspects of hospitalization less favorably. Conclusions: Variables including age and the level of income had a statistically significant influence on the opinion of the respondents. Elderly persons assessed most aspects of the quality of care in a hospital ward more favorably. There were a similar number of “promoters” (36%) and “detractors” (38%) of the quality of hospital services. Detractors mainly pointed to long waiting times for hospital admission, the poor quality of medical and nursing care, and unappealing meals. The promoters emphasized the high quality of medical and nursing care and the favorable conditions of the accommodation.
Regular patient satisfaction surveys are helpful in identifying areas in which the functioning of a medical entity requires changes.
Introduction
The quality of medical services is one of the key attributes of the provision of health care. Increasingly effective methods for the management of healthcare facilities are required due to dynamic changes in the provision of medical services and increased patient awareness [1]. Continuous improvement in medical services and their adaptation to patients' needs are the most important issues for health care today [2].
Quality is a very broad concept. With respect to health care, it should be defined not only from the perspective of treatment results, but also with consideration of the conditions in which the treatment process occurs, the atmosphere in which health services are provided to patients and the cost-result relationship. All these factors translate into quality [3]. Quality in health care is essential, not only for the functioning of medical facilities as such, but, most of all, for the health and comfort of patients [4]. According to the World Health Organization, quality involves the result (technical quality), the use of resources (economic efficiency), the organization of services and patient satisfaction [5,6]. Quality in health care is defined not only according to material criteria, but also according to sociological and psychological criteria and concepts. Quality of care means:
• application of all the achievements of modern medicine that are necessary to satisfy patients' needs,
• health care that is measurable, acceptable, holistic, continuous and documented,
• health care that meets the relevant criteria of care,
• effective activities that raise the level of health and satisfaction of the population, consistent with the appropriate use of resources for providing care to the community and individuals,
• and is a multidimensional and comprehensive concept [7][8][9].
Patient opinion surveys have become an increasingly popular method used to assess key aspects of the functioning of medical facilities and, thus, the entire health care system. It is a prerequisite for developing patient-centered care (patient-centeredness) and an essential component of quality improvement programs. In many countries, including Poland, their use is also written into the accreditation standards of medical facilities [10,11]. Patient's readiness to recommend a hospital is an important indicator of the quality of treatment in patient-centered health care. In a report on strategies to improve the quality of health care in Europe, which was published by WHO and the OECD in 2019, patient's readiness to recommend a hospital was cited as one of the basic indicators of 'patient centeredness' along with patient satisfaction [12]. Therefore, alongside the quality of medical care, a patient recommendation index was also measured in the study presented in this paper. It was based on the answers to questions about patients' readiness to recommend a hospital ward to family and friends. The NPS (net promoter score) methodology was developed in 2003 by Frederik F. Reichheld and described in an article in the Harvard Business Review [13]. Since then, it has become very popular, replacing previously used complicated customer satisfaction questionnaires. Respondents are asked: "Would you recommend our hospital ward to family and friends if they needed similar care or therapy?" [14][15][16]. A follow-up question sought to find out why the patient marked a specific value on the scale, which provided many valuable opinions about the quality of medical care and the management of the medical entity.
Accreditation standards define the manner of providing health care and have a positive impact on the awareness of medical professionals and managers of health care facilities. Quality improvement and patient safety are some of the requirements that must be met by a hospital to be given accreditation. Health care quality improvement means continuous monitoring, analysis and improvement of treatment and management. Quality requires strong leadership, good work organization, cooperation of hospital staff, and effective assessment of quality level. Quality improvement is aimed at reducing risk in a group of patients and caregivers. It involves monitoring and evaluation of quality indicators, on the basis of which specific improvement methods are implemented. The level of improvement is conditioned by the knowledge of the current functioning of a facility, the definition of progress expected and the time necessary to make improvements (quality monitoring). In light of the above, it is necessary to assess the functioning of a facility, i.e., to collect relevant data according to reliable methodologies, to evaluate and analyze data, to identify changes that should be introduced to ensure quality improvement, to implement changes and to undertake further evaluation to determine whether the introduced changes translate into improvement. Through evaluation of patient satisfaction and the quality of medical care, a medical facility can meet accreditation standards and, even more importantly, respond to the changing needs of patients and adapt to the environment in which it operates [10,[17][18][19].
The aim of the study presented in this paper was to examine how patients' opinions on the quality of services in hospital departments can be used to identify areas for improvement in the operation of a healthcare entity. The quality of care as defined by patients serves to identify specific areas for improvement (the hospital as a learning organization), to motivate medical staff and prevent their professional burnout, to build a relationship of trust with patients (if the hospital chooses to publish critical comments and inform the public about corrective actions taken), to compare the quality of care in different facilities (in departments with similar characteristics) so as to offer patients a wider choice, and to motivate facilities to continuously improve the quality of care.
Materials and Methods
The study was carried out in March 2022. The patient opinion survey was conducted using the computer-assisted web interview (CAWI) technique. The sample selection was purposive: the respondents were patients with a history of hospitalization, and selection was supported by an algorithm that randomly drew patients meeting the sampling criteria. The inclusion criterion was hospitalization in the 12 months prior to the study.
A standardized survey was used that contained questions on the assessment of the quality of medical care, including questions about the respect shown by medical staff to patients, the quality of meals served, hygiene-related issues, such as washing and disinfection of hands by medical staff before approaching the patient, the effectiveness of pain treatment, hospital conditions, and patients' opinions on whether they had achieved the best possible treatment results during hospitalization. The second part of the survey concerned patients' rights to information. It examined whether medical staff listened to the patient, informed the patient about their health in an understandable way, informed them about possible treatment methods and their consequences, and asked the patient for their opinion when choosing treatment methods. In addition, the survey contained questions about patient demographic characteristics, such as age and sex, place of residence (number of inhabitants and voivodship), education, household situation, job position and income level.
The factors affecting the quality of medical care, general assessment of the hospital/hospital ward and the likelihood of recommending it to patient's family and friends were examined. A net promoter score (NPS) was used to check how likely the clients/patients were to recommend a product or service to their family and friends. Depending on their answer to the question on a scale of 0-10, the respondents were divided into three groups: promoters, neutral respondents and detractors. Respondents who gave 9-10 points on the scale were defined as promoters: people who were satisfied and likely to recommend the service; 7-8 points were given by those defined as neutral respondents: patients who were satisfied, but not eager to promote the service; 0-6 points were given by those defined as detractors: dissatisfied patients who did not recommend a service, and who might even discourage others from using the service due to its poor quality. The NPS index is widely used in research on the quality of services, including medical services. Importantly, to obtain a full picture of the quality of a given service, apart from response to questions with a score of 0-10, the respondents were asked a further qualitative open question to justify the number of points given. The respondents were asked about the reason for providing a specific score on the scale of 0-10 when assessing the hospital that they stayed in.
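To make the scoring rule above concrete, the sketch below computes an NPS from a list of 0-10 answers. This is a minimal illustration of the standard formula rather than code used in the study, and the example scores are invented.

```python
def net_promoter_score(scores):
    """Compute NPS from 0-10 recommendation answers.

    Promoters give 9-10, neutral respondents 7-8, detractors 0-6;
    NPS = % promoters - % detractors, so it ranges from -100 to +100.
    """
    n = len(scores)
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100.0 * (promoters - detractors) / n

# Invented example roughly matching the shares reported later
# (about 36% promoters, 27% neutral, 38% detractors): NPS is about -2.
example = [10] * 36 + [8] * 26 + [5] * 38
print(round(net_promoter_score(example)))
```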
The following aspects of the quality of medical care were analyzed: treatment of patients by medical staff with care and respect, the quality and freshness of meals, hygiene issues, such as washing and disinfection of hands by medical staff before approaching the patient, effectiveness of pain treatment, hospital conditions, including the size of rooms and ensuring privacy and dignity, as well as patients' opinions on achievement of the best possible treatment results during hospitalization. The greatest number of statistically significant differences was observed when respondents were differentiated based on the net promoter score (NPS). For these data, statistically significant differences were observed for each group: promoters, neutral respondents and detractors.
Treating patients with respect and attention, the quality and freshness of served meals, hygiene issues, such as washing and disinfecting hands by medical staff before approaching the patient, and the effectiveness of pain treatment were assessed on a scale: never, sometimes, usually, always. Hospital conditions, including the size of the rooms and ensuring patients' privacy and dignity, and patients' opinions on the achievement of the best possible treatment result during their hospital stay were assessed on a scale: definitely not, rather not, rather yes, definitely yes.
Only statistically significant results are reported. The standard 95% confidence level was used. Significance was assessed for percentages with reference to the sample size, which in this case means that each subgroup was tested for significance against the total sample. Parametric tests were used to verify hypotheses about the values of proportions in the general population, or to compare proportions between several populations, based on the value of the proportion observed in a random sample (or in two or more samples).
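As an illustration of the kind of parametric proportion test described above, the sketch below runs a two-sample z-test for proportions at the conventional 95% confidence level. The specific test variant and the counts are assumptions for illustration only, not the study's actual data or software.

```python
from statsmodels.stats.proportion import proportions_ztest

# Hypothetical counts: respondents giving the answer of interest ("always")
# among promoters versus detractors, with hypothetical group sizes.
count = [187, 30]
nobs = [288, 304]
stat, p_value = proportions_ztest(count, nobs)
print(f"z = {stat:.2f}, p = {p_value:.4f}")  # flag as significant if p < 0.05
```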
Results
The study sample consisted of 801 persons, 57% of whom were women and 43% men. The largest age groups were patients aged 25-34 (28%) and 35-44 (22%). Persons aged 55-64 accounted for 11% of the research sample and people over 65 years of age accounted for 10%. A total of 44% of the respondents had higher education, 59% women and 41% men. A secondary school level of education was reported by 43% of the respondents, of which 55% were women and 45% men. A primary school level of education was reported by 13% of the respondents, 53% of whom were women and 47% men. The largest groups of respondents lived in rural areas (21%) and in cities with over 500,000 inhabitants (17%). With respect to job position, working persons accounted for 69% and non-working persons accounted for 31% of the study sample. An average net income of up to PLN 3000 was reported by 37% of the respondents and of over PLN 5000 was reported by 20% of the respondents. A total of 12% of the respondents refused to answer the question about salary.
Firstly, we asked about patients' recommendations of a hospital/hospital ward to family and friends (Table 1).
A total of 36% of the sample were promoters, 27% were neutral respondents, and 38% were detractors. Detractors mainly pointed to long waiting times for admission to hospital, poor quality of medical and nursing care and unappealing meals. Some also referred to other important factors that could affect the quality of care, such as shortage of medical staff, and the overly heavy workload of medical personnel. The study was carried out during the pandemic, so some patients also referred to not being able to be visited by family. The promoters highlighted the high quality of medical and nursing care and good conditions of the accommodation.
The patients' opinions on being treated with due interest and care by the medical staff in terms of respecting patient rights and involving the patient in the treatment process are very important (Table 2).
Detractors (NPS scale) declared significantly less frequently (10%) that, during their stay in hospital, medical staff always treated them with due interest and care. The answer 'sometimes' was given significantly more often (53%). Promoters (NPS scale) declared significantly more often (65%) that, during their hospital stay, medical staff treated them with due interest and care. Considering the demographic data, the greatest number of statistically significant differences was observed when differentiating respondents according to age. The highest values of differences concerned the answer 'always' among the oldest patients. Those aged 55-64 and over 65 years reported 48% and 50% more often, respectively, that the hospital medical staff always treated them with due interest and care. The respondents aged 18-24 years reported the same answer ('always') 17% less often. The answers of the respondents were also analyzed with respect to their net monthly income. Statistically significant differences were observed for persons with an income of up to PLN 3000 net (the answer 'sometimes' was indicated 21% less often) and for those with a net income of PLN 3001-5000 (the answer 'sometimes' was indicated 33% more often). We investigated a further important aspect-the quality and conditions of serving meals in hospitals, which have a direct impact on patients' health, recovery and, in the case of inadequate conditions, the spread of infectious diseases in the ward (Table 3).
Detractors (NPS scale) declared significantly more often that they never (33%) and sometimes (37%) were given tasty, fresh and hygienically served meals during their stay in hospital. However, promoters (NPS scale) declared significantly more often that they always (43%) received tasty, fresh and hygienically served meals during their hospital stay. With respect to age, statistically significant differences were observed mainly in the groups of young respondents (18-24 and 25-34 years of age). A total of 37% of those aged 18-24 years indicated that they were only sometimes given tasty and fresh meals, and only 14% of this group declared that the meals were always of adequate quality and served hygienically. Patients aged 25-34 significantly more often (21%) indicated that their meals were never tasty, fresh or hygienically served. A further important aspect of medical care is hygiene compliance among medical personnel, which affects the epidemiological safety of patients (Table 4). Detractors (NPS scale) significantly more often declared that the hospital staff never (14%) or sometimes (31%) washed or disinfected their hands before approaching the patient. Promoters (NPS scale) declared significantly more often that the staff always (67%) washed or disinfected their hands before approaching them. With respect to age, statistically significant differences were observed in the group aged 55-64 years. Respondents in this age group significantly less often indicated that the medical personnel only sometimes (13%) washed or disinfected their hands.
We asked how often pain treatment in hospital was effective in connection with patients' quality of life and well-being (Table 5). Detractors (NPS scale) significantly more often reported the answer that the pain was never (8%) or sometimes (36%) successfully treated during their hospitalization. Promoters (NPS scale) declared significantly more often that the pain was always (64%) treated effectively during their stay in hospital. With respect to age, statistically significant differences were observed mainly in the groups aged 25-34 years and 55-64 years. Persons aged 25-34 significantly less often indicated that the pain was always (31%) successfully treated. However, patients aged 55-64 years significantly more often indicated that the pain was always (53%) successfully treated during their hospitalization.
We undertook an analysis of patients' opinions on hospital conditions and the accessibility of diagnostics and treatment with respect to patient dignity and intimacy, which is very important in relation to patients' rights (Table 6). Detractors (NPS scale) gave a 'definitely not' answer (19%) and a 'rather not' answer (37%) significantly more often. Promoters (NPS scale) declared 'definitely yes' significantly more often (51%). With respect to age, statistically significant differences were observed mainly in the age group 55-64 years. Persons aged 55-64 significantly more often indicated the answer 'definitely yes' (38%).
Subjective analysis of patients' opinions on achieving the best possible treatment outcome during hospitalization indicates overall patient satisfaction in terms of value-based healthcare (Table 7). Detractors (NPS scale) significantly more often gave the answers 'definitely not' (18%) and 'rather not' (34%). In contrast, the answer 'rather yes' was indicated significantly less often by detractors (42%). Promoters (NPS scale) chose the answer 'definitely yes' significantly more often (52%). With respect to age, statistically significant differences were observed mainly in the age group 18-24 years. Persons aged 18-24 years significantly more often gave the answer 'definitely not' (25%) and significantly less often chose the answer 'definitely yes' (15%). The respondents' answers were also analyzed in terms of achieved monthly net income. Statistically significant differences were observed for persons with an income of more than PLN 5000 net (the answer 'definitely yes' was reported 37% more often).
Discussion
Patients' readiness to recommend a hospital is a recognized indicator of the quality of treatment in patient-centered health care. In a report on strategies to improve the quality of health care in Europe, which was published by WHO and the OECD in 2019, patients' readiness to recommend a hospital was one of the basic indicators of 'patient centeredness', as well as patient satisfaction [12]. Therefore, the study presented in this paper, in addition to assessing the quality of medical care, also used a patient recommendation index, which was based on answers to a question about the readiness to recommend a hospital ward to family and friends. Respondents defined in the first question as detractors negatively assessed particular aspects of the quality of health care much more often. An equally important (though considered by some researchers to be more important) open-ended question was used, enabling respondents to 'fill in the gaps', i.e., providing an opportunity to gain feedback from dissatisfied patients on the reasons for their dissatisfaction, in order to use the information to improve current processes and make necessary changes to improve the quality of care [20,21]. Many approaches to measuring patient satisfaction are reported in the literature. Due to the use of different measurement methods, the results obtained cannot always be straightforwardly compared. In 2012, the British National Health Service (NHS) introduced an obligation to examine patients' opinions using a method based on a modified version of the NPS, the so-called "Friends and Family Test" (FFT). The question asked in this test is "How likely are you to recommend our ward to friends and family if they needed similar care or treatment?" [14,15,22]. In a comparative study on different methods of measuring patient satisfaction that was conducted in six hospitals in the Netherlands, the authors changed the method of calculating the indicator, and defined detractors as respondents who gave answers 0-5, neutral respondents as those who gave answers 6-7, and promoters as those who gave answers 8-10. This change resulted from the necessity to adjust the interpretation of the scale to the cultural context. In the Netherlands, there is a school grading scale of 1 to 10, with 8 being considered a very good grade and 6 as the pass threshold [16]. By including a follow-up question to explore why the patient marked a specific value on the scale, many valuable patients' opinions were obtained, which concerned not only the quality of medical care, but also the management of the medical entity.
In the study, respondents categorized as detractors reported long waiting times for hospital admission, poor quality of medical and nursing care, and unappealing meals. Some detractors also highlighted extremely important factors that could contribute to the poor quality of care, i.e., shortages of medical staff and the excessive workload of medical staff. However, respondents categorized as promoters reported a high quality of medical and nursing care as well as good conditions in their accommodation. The COVID-19 pandemic increased the level of stress among the nursing staff in comparison to the pre-pandemic period, which was reflected in an increase in stress symptoms. Aggravating factors reported included the fear of transmitting the virus from the workplace to relatives, the fear of a threat to one's own life and health and that of relatives, rapid organizational changes, and continuous work under an increased sanitary regime. All these factors translated into a heavy workload, high levels of stress and a high risk of burnout among the nursing staff [23]. A study conducted on a group of patients of the Autonomous Public Teaching Hospital No. 4 in Lublin showed high and medium levels of patient satisfaction with nursing care, in terms of the accommodation that was provided and the help given by the nursing staff, the cleanliness and aesthetics of the hospital/ward rooms, necessary assistance during washing or bathing, conditions for rest and sleep, and the provision of assistance in getting up, sitting and walking. Patients' satisfaction with nursing care was low in terms of assistance in airing rooms, meeting physiological needs, free time management and physical exercise and rehabilitation [24]. The attitude of nurses, their respect for the patients' dignity, and assistance in everyday activities have a decisive impact on the level of patients' satisfaction with a stay in a hospital ward. From the patients' perspective, these are the most important aspects for assessing a hospital. The study presented in this paper shows that nursing care was rated relatively highly. Patients aged 55-64 and 65+ years assessed the medical staff's respect for the patient much more highly than those of younger age. Other research conducted among primary care patients has confirmed that interpersonal aspects are very important to patients. Proper communication is especially important for patients when usual visual cues are missing. Communication emerged as one of the highest-rated aspects of care when patient satisfaction with online consultations was assessed. The most highly rated variables were those of empathy and respect for patients; patients appreciated being treated by doctors with care, respect and patience. Patients were also found to report high levels of satisfaction with the comprehensiveness of care, indicating that the primary health care units were able to meet most of the patients' health needs [25]. In a study of patient satisfaction with services provided in inpatient health care in the Kujawsko-Pomorskie Voivodship, the pro-patient approach of the nursing staff was rated very positively or positively by 90% of respondents. The carefulness of performing procedures by nurses was assessed very positively and positively by 90% of respondents. The availability of nursing staff during the day was poorly assessed by only 0.4% of the respondents. The availability of nursing staff at night was rated badly and very badly by 3.5% of respondents [26].
Similar results were obtained in another study on the level of patient satisfaction, where patients aged 65+ rated the quality and availability of nursing care much more highly [27]. Another study on patient satisfaction conducted by Sierpińska and Dziuba showed that 87% of respondents rated nurses' friendliness as very good and the response time to patients' requests as very good [28]. The provision of information on treatment procedures and post-operative management was assessed as very good by 70% of respondents. Hygiene assistance was rated very positively by 76% of the respondents. Similar results were also obtained in another study aimed at the assessment of medical care, in which 53% of patients rated nursing care as very good, 43% as good, and 1% as bad [29]. Another study showed that providing information on the purpose and types of nursing procedures was rated positively by 78% of patients. Opinions on the attitude of nurses and their professionalism were also very positive: 84% of respondents positively assessed the level of professionalism, and over 78% of patients appreciated the nursing staff's friendliness and understanding [30,31]. Statistically significant differences were observed with respect to the quality of meals, mainly in the younger age groups (18-24 and 25-34). Patients aged 18-24 indicated 37% more often the answer that they were only sometimes given tasty and fresh meals, and 14% less often that they were always served meals of adequate quality and hygienic standards. Patients aged 25-34 significantly more often (21%) indicated that their meals had never been tasty, fresh and hygienically served. A study on patient satisfaction with stationary health care services in the Kujawsko-Pomorskie Voivodship indicated a high level of patient dissatisfaction with the meals served. The respondents complained mainly about the temperature of the meals, the variety of dishes and the size of the meals served [26]. Similar results were obtained in another patient satisfaction study, where patients aged 65+ rated the meals much more highly [27].
Another important aspect of the quality of medical services is hospital accommodation, including the size of the rooms and the availability of separate diagnostics rooms that allow for the provision of medical services with respect for the patient's privacy and dignity. In the study presented in this paper, the elderly patients assessed this aspect of health care much better than patients from other age groups. In the study, the patients drew attention, not only to the quality of services, but also to the conditions which they stayed in. The standard of equipment was viewed as an aspect that affected the patient's comfort during hospitalization. The results of the study showed that the vast majority of patients were satisfied with the conditions which they stayed in [32][33][34]. A study on patient satisfaction with stationary health care in the Kujawsko-Pomorskie Voivodship showed that the conditions of hospital rooms, and the respect shown for patient's dignity and privacy were assessed as bad and very bad by 6% of respondents [26].
In sum, providing patients with care that is safe, effective and responsive to patient needs is now recognized as the foremost objective of health systems in all OECD countries. To achieve this, it is necessary to measure the quality of care and to help managers in health entities identify the drivers of high-quality care as the cornerstones of quality improvement. Measuring patients' opinions helps to evaluate aspects such as effectiveness and the achievement of desirable outcomes to ensure correct provision of evidence-based healthcare. Safety, reducing harm caused in the delivery of health care processes and patient-centered practice, placing the patient/user at the center of healthcare delivery, are critical [35].
Limitations of the Study
A limitation of the study is the unrepresentativeness of the sample with respect to age, which does not allow conclusions to be drawn for the entire population. Half of the respondents were persons aged 25-44 years and only 10% of patients were over 65 years of age, which is the group that uses a large percentage of hospital services. Another limitation is that it was a one-off study. To constantly monitor the quality of health care and patient experience, and to introduce procedural improvements and improve the quality of services, continuous and regular research needs to be carried out.
Conclusions
Variables such as age and income had a statistically significant influence on the reported opinions of the respondents. Elderly patients highly valued most aspects of hospital care, such as treating patients with due care and respect, the quality and freshness of meals served in the hospital ward, effective pain treatment and respect for the patient's privacy and dignity.
There was a similar number of promoters (36%) and detractors (38%) of the quality of services. Detractors pointed mainly to the long waiting times for admission to hospital, poor quality of medical and nursing care, and unappealing meals. The promoters reported a high quality of medical and nursing care as well as good accommodation conditions. Regular studies on patient satisfaction with medical services in hospital wards provide information on patients' opinions and, thus, are helpful in identifying areas of the operation of a medical entity that require improvement.
|
v3-fos-license
|
2019-09-09T18:55:29.619Z
|
2019-08-01T00:00:00.000
|
201843751
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "http://www.cell.com/article/S2405844019358955/pdf",
"pdf_hash": "dd0fd078dbe20040904c043ae762de7fefe5de99",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46482",
"s2fieldsofstudy": [
"Psychology",
"Medicine"
],
"sha1": "e3dc776f932e2e0ff02f21be2544de1aad9d66e6",
"year": 2019
}
|
pes2o/s2orc
|
The effect of anxiety on cognition in older adult inpatients with depression: results from a multicenter observational study
Late-life depression is associated with reduced cognitive function beyond normal age-related cognitive deficits. As comorbid anxiety frequently occurs in late-life depression, this study aimed to examine the association between anxiety symptoms and cognitive function among older inpatients treated for depression. We hypothesized that there would be an overall additive effect of comorbid anxiety symptoms on dysfunction across cognitive domains. The study included 142 patients treated for late-life depression in hospital, enrolled in the Prognosis of Depression in the Elderly study. Anxiety symptoms were measured at admission using the anxiety subscale of the Hospital Anxiety and Depression Scale. Patients completed cognitive tasks at admission and discharge. Linear mixed and generalized linear mixed models were estimated to investigate the effect of anxiety on continuous and categorical cognitive scores, respectively, while controlling for depression. Anxiety severity at admission was not associated with performance in any of the cognitive domains. Patients with more symptoms of anxiety at admission demonstrated a significant improvement in immediate recall during the hospital stay. Patients with a score above cutoff indicating clinically significant symptoms on the anxiety subscale performed better on general cognitive function, as measured by the Mini Mental Status Examination at admission, than those below cutoff for anxiety. In conclusion, comorbid anxiety symptoms had no additive effect on cognitive dysfunction in late-life depression in our sample of inpatients.
Introduction
Depression is commonly accompanied by anxiety (Löwe et al., 2008), especially in older adults (Lenze et al., 2000). Severe anxiety symptoms corresponding to a diagnosis of Generalized Anxiety Disorder (GAD) have been reported in nearly 60% of older inpatients diagnosed with depression (Bendixen and Engedal, 2016). Comorbid anxiety symptoms in late-life depression (LLD) have been associated with more severe depression (Bendixen et al., 2018;Lenze et al., 2000), worse treatment response (Andreescu et al., 2007) and higher suicidality (Bendixen et al., 2018) than LLD without comorbid anxiety.
Knowledge about if and how comorbid anxiety symptoms might affect cognitive functioning in LLD is more limited. LLD itself is associated with reduced cognitive function beyond normal age-related cognitive deficits (Koenig et al., 2014;Morimoto and Alexopoulos, 2013), and may be an independent risk factor or a prodromal stage of dementia (Diniz et al., 2013). The association between anxiety disorders and dementia is still uncertain (de Bruijn et al., 2014;Gulpers et al., 2016). Compared to depression without co-occurring anxiety, comorbid anxiety disorders in LLD might be associated with a more severe decline in some cognitive domains relative to others. In one study, older adults with depression and a comorbid anxiety disorder had a greater decline in memory during a 4-year period compared to depressed older adults without a comorbid anxiety disorder. Other cognitive domains were however not affected (DeLuca et al., 2005). To our knowledge, only one research group has looked specifically at how anxiety symptoms in LLD influence cognitive function. Bendixen and colleagues studied older inpatients with depression, but found no relationship between anxiety symptoms and impairment in general cognitive function as measured by the Mini Mental Status Examination (MMSE) and the Clock Drawing Test at admission to hospital (Bendixen et al., 2018). The group did however not include other measures of cognition.
According to attentional control theory (Eysenck and Derakshan, 2011;Eysenck et al., 2007), anxiety leads to enhanced focus on threat information and leaves fewer resources available for task relevant stimuli. Consequently, anxiety can result in domain specific dysfunction, such as problems with tasks involving inhibition of irrelevant information and attention switching. In line with attentional control theory, studies with healthy, community-dwelling older adults have indicated that subclinical symptoms of anxiety are related to poorer performance in specific cognitive domains. Impairments have been reported particularly in relation to executive functions, such as processing speed/shifting attention and inhibition (Beaudreau and O'hara, 2009;Yochim et al., 2013), but also in episodic memory (Stillman et al., 2012;Yochim et al., 2013). Similarly, in younger adults, major depressive disorder (MDD) with a comorbid anxiety disorder has been linked to greater executive dysfunction and psychomotor slowing compared to MDD alone (Basso et al., 2007), particularly in switching attention and inhibition functioning (Lyche et al., 2011).
More research is needed to clarify the potentially complex relationship between anxiety symptoms, depression, and cognitive function in late life. LLD is a heterogeneous disorder in which some individuals are assumed to experience a reduction in cognitive abilities over time, so it is important to clarify whether anxiety symptoms are a contributing factor to this decline. The overall aim of the present study was therefore to investigate the impact of comorbid anxiety symptom severity on cognitive function across several domains in patients admitted for in-hospital treatment for LLD. A broad set of cognitive tasks were included to measure performance in several cognitive domains, such as general cognitive function measured by the MMSE, different components of episodic memory, word fluency, and measures of executive functions including processing speed and attention switching. Anxiety was measured by the anxiety subscale of the Hospital Anxiety and Depression Scale (HADS-A) at admission to hospital. The objectives of the current study were (1) to assess the impact of anxiety symptom severity on change in performance across the cognitive domains during the hospital stay; and (2) to analyze anxiety symptom severity and how it affects cognitive performance at admission and at discharge from hospital. Finally, (3) using a cutoff on the HADS-A of ≥ 8 in exploratory analyses, we compared performance on the cognitive tasks between patients above and below the cutoff for clinically significant anxiety symptoms, both (a) in relation to change in cognitive performance between admission and discharge, and (b) in relation to cognitive performance at admission and at discharge.
We hypothesized that there would be an overall additive effect of comorbid anxiety symptoms in LLD on dysfunction across the cognitive domains. Although the literature on comorbid anxiety symptoms in depression and cognitive function is scarce, we reasoned that specific domains including executive functioning and episodic memory would be more affected by co-occurring anxiety than other cognitive domains. To our knowledge, this is the first study to examine coexisting anxiety symptoms and their associations with functioning across several cognitive domains during hospital treatment for LLD.
Design
We used data from the Prognosis of Depression in the Elderly (PRODE) sample. PRODE is a Norwegian multicenter, observational, prospective study of older inpatients treated for depression in nine departments of old-age psychiatry, previously described in Borza et al. (2015).
Patients
Persons were eligible for inclusion in the PRODE study if they were 60 years or older referred to specialist health care service for treatment of depression, not successfully treated in primary health care. For detailed information see Borza et al. (2015). Patients with dementia who had severe aphasia and patients with life-threatening diseases were not included. The participating patients and caregivers were given oral and written information about the study, and they subsequently gave written consent to participate. For patients without the capacity to give written consent, their next of kin gave consent on behalf of the patient. The study was approved by the Regional Committee of Medical Research Ethics and the Privacy and Data Protection Officer at Oslo University Hospital.
A total of 169 patients from nine centers were included in the PRODE sample between December 2009 and January 2013. Previous analyses showed no difference in age and sex between those who agreed and those who refused to participate (Borza et al., 2015). Nine patients were excluded because they were outpatients, fourteen patients were excluded for having been diagnosed with dementia during the hospital stay, and four patients were excluded because of missing data on anxiety level at admission. The current study ultimately included data for 142 older adult inpatients with depression.
Anxiety and depression scales
Anxiety and depression symptoms at admission and discharge from hospital were measured using the Norwegian version of the HADS (Zigmond and Snaith, 1983). The scale consists of 14 items, where seven items assess anxiety symptoms (HADS-A, e.g. "I feel tense or wound up"), and seven items address depression symptoms (HADS-D, e.g. "I have lost interest in my appearance"). The items are rated on a 4-point scale ranging from 0 to 3. Higher scores on HADS-A and HADS-D indicate more severe anxiety and depression, respectively. The scale has been validated in the Norwegian language, and the internal consistency reliability score (Cronbach's alpha) has been found to vary between 0.77 and 0.88 for HADS-A, and between 0.70 and 0.88 for the HADS-D (Leiknes et al., 2016). The scale has proved to be a reliable and valid screening tool of severity and caseness of anxiety and depression in a variety of different samples (Helvik et al., 2011). To identify patients with clinically significant anxiety symptoms, we used the most common cutoff (≥ 8) on the anxiety subscale of the Hospital Anxiety and Depression Scale (HADS-A) and divided patients into groups indicating anxiety versus no anxiety (Bjelland et al., 2002).
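A minimal sketch of how HADS subscale totals and the ≥ 8 caseness cutoff described above could be computed is given below; the assumption that anxiety and depression items alternate follows the published scale layout, and the function names are illustrative only, not part of the study's analysis code.

```python
def score_hads(item_scores):
    """Sum 14 HADS items (each scored 0-3) into HADS-A and HADS-D totals.

    Assumes the usual published layout in which odd-numbered items belong
    to the anxiety subscale and even-numbered items to the depression
    subscale; each subscale therefore ranges from 0 to 21.
    """
    hads_a = sum(item_scores[i] for i in range(0, 14, 2))  # items 1, 3, ..., 13
    hads_d = sum(item_scores[i] for i in range(1, 14, 2))  # items 2, 4, ..., 14
    return hads_a, hads_d

def anxiety_caseness(hads_a, cutoff=8):
    """Dichotomize HADS-A at the conventional cutoff (>= 8) used in the study."""
    return hads_a >= cutoff
```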
Cognitive measures
Cognitive assessment was done at admission and discharge from hospital. General cognitive function was measured using the Norwegian revised version of the Mini Mental Status Examination (MMSE-NR) (Folstein et al., 1975;Strobel and Engedal, 2008). The scale includes 20 simple questions and tasks that measure orientation, memory, arithmetic skills, language and basic motor abilities. Scores range from 0 to 30 and a higher score indicates better overall cognitive function. The scale has acceptable test-retest reliability (≥ 0.7) (Strobel and Engedal, 2008).
Word fluency was measured by two subtests of the Controlled Oral Word Association Test (COWAT). Letter fluency is measured as the total number of words the patient is able to produce starting with the letters F, A and S within a time limit of 1 min for each letter. Similarly, category fluency is measured as the total number of items named for the two categories "animal" and "clothing" (Benton, 1967). Acceptable test-retest reliability (0.74) has been proven for letter fluency (Ruff et al., 1996).
Episodic memory was measured by three subtasks of the Ten Word Test (Consortium to Establish a Registry for Alzheimer's Disease, CERAD) (Morris et al., 1988). The test consists of ten words presented and learned across three trials. Immediate recall is measured as the number of words the subject is able to recall across the three trials, with a total possible score of 30. Delayed recall is measured as the number of words the subject is able to reproduce after a delay of 10 min. Subjects are then given a list of ten novel words mixed with the ten words from the original list. Recognition is measured as the total number of correct positive and negative responses of whether each word was part of the original list or not, with a total possible score of 20. Test-retest reliability scores for the three subtasks are shown to range between 0.5-0.8 (Welsh-Bohmer and Mohs, 1997).
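The recognition score described above can be illustrated with a small sketch; the data structures and names below are assumptions for illustration, not the CERAD administration procedure itself.

```python
def score_recognition(responses, original_words):
    """Score the CERAD ten-word recognition trial (maximum 20).

    responses: dict mapping each of the 20 presented words (10 originals
    plus 10 distractors) to the participant's answer, True meaning
    'this word was on the original list'. A point is given for every
    correct 'yes' (hit) and every correct 'no' (correct rejection).
    """
    correct = 0
    for word, answered_yes in responses.items():
        is_original = word in original_words
        if answered_yes == is_original:
            correct += 1
    return correct
```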
Processing speed and attention switching (executive function) were measured using two subtests of the Trail Making Test (TMT), TMT-A and TMT-B (Reitan, 1958). In TMT-A patients are instructed to sequentially connect numbered dots as fast as possible. Time to complete the task is used as a measure of processing speed. In TMT-B the subject needs to alternate between number and letters and connect the dots in numerical and alphabetical order as fast as possible. Time to complete the task is used as a measure of processing speed and attention switching. Results on the TMT were scored according to existing age-adjusted norms derived from Ivnik et al. (1996). Test-retest reliability is proven to be acceptable for TMT-A and good for TMT-B (0.75 and 0.85, respectively) (Giovagnoli et al., 1996).
Demographic and clinical characteristics
Information on demographic characteristics and psychiatric history, including previous depressive episodes and age of onset of the first lifetime depressive episode, was obtained from case notes and structured interviews with patients and caregivers at admission. Diagnoses were established during hospital stay according to ICD-10 criteria (World Health Organization (WHO), 1993). Medications were classified according to the Anatomical and Therapeutic Chemical classification system. Use of psychotropic medications at admission and discharge was defined as the number of antidepressants, anxiolytics, hypnotics, antipsychotics, antidementia drugs, lithium, antiepileptics, and antiparkinsonian drugs patients were using at admission and discharge from hospital. Physical health was measured by the General Medical Health Rating (GMHR), a one-item scale with four categories (excellent, good, fair, and poor). GMHR was dichotomized into good (excellent/good) and poor (fair/poor) health status. High interrater reliability is reported for GMHR (weighted kappa = 0.91) (Lyketsos et al., 1999). Marital status was dichotomized into single (including singles, divorced or separated patients, and widows/widowers) and not single (married or living together with a partner).
Procedure
Standardized measures were administered by health professionals at admission and at discharge. Health professionals working in the involved departments received training in the standardized administration procedure before the start of the study and twice a year during the study period. Evaluation of eligibility of patients and inclusion in the study was done as soon as possible after admission to hospital by the trained health professionals. The mean number of days from admission to inclusion was 5.6 days (standard deviation (SD) = 6.0).
There was no treatment protocol; treatment varied across patients and study centers and included a range of different approaches. All patients received multidisciplinary treatment, combining medications and various therapeutic approaches. Among the patients, 90.1% received psychotropic medications at hospital admission and 94.4% received psychotropic medications at discharge. A total of 39 patients (27.5%) received electroconvulsive therapy during the hospital stay, with an average of 12.7 (SD = 6.0) treatments. Discharge from hospital was based on the clinical procedure at each department, and the discharge assessment was done as close as possible to the discharge date.
Statistical analysis
All statistical analyses were conducted using the Statistical Package for the Social Sciences (SPSS v. 25.0) and Statistical Analysis System (SAS v. 9.4). Imputation for MMSE-NR and HADS was performed for cases with 50% or fewer missing values on the scale. The empirical distribution for each item on the scale was determined, and a random number drawn from that distribution was used to replace the missing value. In the current sample, three values at admission and one at discharge for items on the MMSE-NR scale and one value at admission for an item on the HADS-A scale were imputed.
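The single-imputation scheme described above can be sketched as follows; the DataFrame layout, column handling and random seed are assumptions for illustration, and the study itself used SPSS/SAS rather than Python.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(2022)

def impute_from_empirical(items: pd.DataFrame, max_missing_frac: float = 0.5) -> pd.DataFrame:
    """Replace missing item scores with draws from each item's observed
    (empirical) distribution, but only for cases with at most 50% of the
    scale's items missing."""
    out = items.copy()
    eligible = out.isna().mean(axis=1) <= max_missing_frac  # rows allowed to be imputed
    for col in out.columns:
        observed = out[col].dropna().to_numpy()
        to_fill = out[col].isna() & eligible
        if to_fill.any() and observed.size:
            out.loc[to_fill, col] = rng.choice(observed, size=to_fill.sum())
    return out
```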
Patient characteristics were presented as means and SDs or frequencies and percentages, as appropriate. The HADS-A admission score was used in primary analyses. For exploratory analyses, the HADS-A admission score was dichotomized into two groups with a cutoff score of ≥ 8 for caseness of anxiety. Patients in different anxiety groups were compared using independent samples t-test and χ²-test.
Because patients were included from different centers, data could exhibit a hierarchical structure, while repeated measurements for patients imply within-patient correlations. To correctly adjust all estimates for within-patient and within-center correlations, random effects for patients nested within the centers were entered in all proceeding models. Center-level was eliminated if negligible or not present.
Six linear mixed models, one for each cognitive test measured as a continuous variable, were estimated using the SAS MIXED procedure. Time between admission and discharge, the HADS-A admission score, and the interaction between HADS-A and time were entered as fixed effects. For categorical tests, TMT-A, and TMT-B, generalized linear mixed models with the same fixed effects were estimated (SAS GLIMMIX procedure). A significant interaction term would imply that there are overall differences in association between HADS-A admission score and cognitive test at admission and discharge. In post hoc analysis, the models were explored further and the associations at each time point and differences between time points for varying HADS-A values were quantified. HADS-A score was substituted with a dichotomized HADS-A in exploratory analyses.
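A hedged Python approximation of the model specification described above (the study itself used SAS PROC MIXED/GLIMMIX): a cognitive score is regressed on time, the HADS-A admission score and their interaction, with a random intercept per patient. The toy data frame and variable names are assumptions for illustration only.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_patients = 142

# Toy long-format data (two rows per patient: admission = 0, discharge = 1).
df = pd.DataFrame({
    "patient_id": np.repeat(np.arange(n_patients), 2),
    "time": np.tile([0, 1], n_patients),
    "hads_a_adm": np.repeat(rng.integers(0, 22, n_patients), 2),
    "hads_d_adm": np.repeat(rng.integers(0, 22, n_patients), 2),
    "age": np.repeat(rng.integers(60, 90, n_patients), 2),
})
df["score"] = 20 + 0.5 * df["time"] + rng.normal(0, 2, len(df))

# Random intercept per patient; the centre level is omitted here, mirroring
# the paper's note that it was dropped when negligible.
model = smf.mixedlm("score ~ time * hads_a_adm + hads_d_adm + age",
                    data=df, groups=df["patient_id"])
result = model.fit(reml=True)
print(result.summary())  # the time:hads_a_adm coefficient tests the interaction
```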
All regression models were adjusted for depression severity at admission (HADS-D), previous depressive episodes, and number of psychotropic medications across admission and discharge in addition to sex, age, and education. Because TMT-A and TMT-B were scored according to age-based norms, only adjustment for depression severity at admission, sex and education was performed. The cognitive test scores are highly correlated, so we implemented no adjustment for multiple testing. P values are reported as they are in all models, and significance level was set to the conventional 5% in all analyses.
Descriptive findings
The mean HADS-A score at admission was 11.4 (SD = 4.7) and decreased significantly to 6.5 (SD = 4.5) at discharge (p < 0.001), while the mean HADS-D score decreased significantly from 11.9 (SD = 4.8) at admission to 6.4 (SD = 4.5) at discharge (p < 0.001). Demographic and clinical characteristics at admission are shown in Table 1.
Demographic and clinical characteristics across dichotomized HADS-A groups based on cutoff score (anxiety versus no anxiety) are given in Table 2. Patients above cutoff on HADS-A scored significantly higher on HADS-A at discharge, HADS-D at admission, stayed longer in the hospital, and used more psychotropic medications at admission than patients below the cutoff. Age, sex, education, marital status, psychotropic medications at discharge, and GMHR did not differ across HADS-A groups. There was no difference in distribution of diagnoses (recurrent depression, bipolar disorder, depression with psychosis, personality disorder) or age of onset of first depression episode, duration of depressive episode or occurrence of previous depressive episodes across HADS-A groups. Table 3 displays the raw mean scores and SDs for each cognitive test at admission and discharge.
Immediate recall
Overall, there were differences between time points regarding the association between HADS-A score and the immediate recall task (p = 0.037) with significantly higher immediate recall scores at discharge compared to admission for increasing HADS-A, but only for HADS-A values above 4 (Table 4). Exploratory analyses with the dichotomized HADS-A showed that there was no significant change in the immediate recall task score from admission to discharge in either group; however, overall the change was significantly different between the groups (p = 0.030 for interaction) (Table 4), where those with a score above cutoff for anxiety on HADS-A recalled significantly more words at discharge than at admission (p < 0.001), with no difference among those below cutoff.
General cognitive function
No association between HADS-A score and performance on MMSE was found. According to exploratory analyses with the dichotomized HADS-A, there was no overall difference in change in MMSE between those below and above the anxiety cutoff. However, those with a score above cutoff for anxiety on HADS-A scored higher on MMSE at admission (p = 0.030) compared to those below cutoff.
Delayed recall and word fluency
No association was found between HADS-A score and performance on the delayed recall task, or performance on the word fluency task. Patients overall did, however, remember more words at discharge compared to admission (p = 0.001) in the delayed recall task, and produced more words at discharge compared to admission in the word fluency tasks (p = 0.037) (Table 4). The same finding was present in exploratory analyses with dichotomized HADS-A as explanatory variable (Table 4).
Recognition, category fluency, and processing speed/attention switching (executive function)
No association between continuous or dichotomized HADS-A score and performance on the recognition task, category fluency task (Table 4), TMT-A, or TMT-B (Table 5) was found. Neither were there any significant interactions present.
Discussion
This study examined the relationship between anxiety symptoms at hospital admission and change in cognitive function during treatment of LLD, and between anxiety symptoms at admission and cognitive function at hospital admission and discharge. To our knowledge, no studies have looked at coexisting anxiety symptoms measured by HADS-A and their associations with cognitive function in several cognitive domains in older persons with clinical depression. Higher level of comorbid anxiety symptoms at admission was not associated with reduced cognitive function in any of the cognitive domains in patients treated for depression in this study. The findings are therefore in line with the literature suggesting that anxiety does not lead to an increased risk of cognitive dysfunction (de Bruijn et al., 2014). Based on previous findings, we reasoned that episodic memory would be particularly negatively affected by comorbid anxiety symptoms. Although patients with more pronounced anxiety symptoms at admission scored significantly higher on the immediate recall task at discharge compared to admission, there was no association between anxiety symptom level and the immediate recall task itself. Similarly, there was no association between anxiety symptom level and performance on the delayed recall and recognition tasks. This contrasts with the findings of DeLuca et al. (2005), where having a comorbid anxiety disorder with depression was associated with an accelerated memory decline relative to only having a diagnosis of depression. Our findings suggest that symptoms of anxiety that occur together with a diagnosis of depression do not lead to a greater reduction in memory during hospital stay. There was no association between anxiety symptom severity or anxiety groups and cognitive function either at admission or at discharge, except for performance on the MMSE at admission, where it was found that those above cutoff for anxiety scored higher compared to those without anxiety. Anxiety has been proposed in some circumstances to be beneficial for cognitive performance. In a series of studies by Bierman and colleagues (Bierman et al., 2005;Bierman et al., 2008), mild anxiety in community-dwelling older people, as measured by the HADS-A, was related to better performance, while severe anxiety was negatively associated with performance. Others have also posited that state anxiety does not need to be detrimental but rather could be favorable for cognition when controlling for confounders (Potvin et al., 2013). As there was no difference in general cognitive function in our sample at discharge from hospital, our findings are also in line with the study of Bendixen et al. (2019), where it was found that initial anxiety among older adults in specialist mental health services did not predict future decline in general cognitive function as measured by MMSE. Throughout the hospital stay there was a significant improvement in number of words remembered on the delayed recall task, and in number of words produced in the word fluency task. Initial problems in cognitive function related to depression and/or anxiety at admission were most likely present among patients, and improvements might have been caused by treatment and thus reductions in psychopathological severity. Patients with depression improve in cognitive function during antidepressant treatment (Butters et al., 2000;Yoo et al., 2015), although they are still more impaired than older persons without psychiatric illness after treatment (Butters et al., 2000).
Alternatively, some of the improvement might have resulted from practice effects. For instance, older men with and without impairments in delayed memory function at baseline showed practice effects over time, while the beneficial effects of practice disappeared after five years for those with impairments at baseline (Mathews et al., 2013). A substantial number of the patients in the current sample experienced a significant amount of anxiety at admission to hospital. Nearly 80% of the patients scored above the cutoff for clinically significant anxiety symptoms. Although comorbid anxiety symptoms were not associated with additional cognitive problems in our sample of patients in treatment for late-life depression, we found that patients above the cutoff for anxiety at admission also seemed to need longer treatment time, used more medications and had higher anxiety at discharge than patients below the cutoff score. The findings indicate that patients with late-life depression and comorbid anxiety symptoms have more severe illness than those without anxiety, consistent with studies that have linked comorbid anxiety to worse treatment response (Andreescu et al., 2007), and more severe depression (Bendixen et al., 2018;Lenze et al., 2000). Thus, it is important to target and treat anxiety in patients with late-life depression.
Limitations and strengths
Cognitive test scores are correlated, and the Bonferroni correction is overly conservative in such cases, lowering the chance of detecting real differences (Type 2 error). As we hope that our findings encourage future studies, replication and further exploration of the association between comorbid anxiety symptoms in late-life depression and cognition, p-values were reported without adjustment for multiple testing. The results should therefore be interpreted with caution. Our main aim was to study the effect of comorbid anxiety symptoms on cognitive function among patients with depression, and a control group was not considered necessary. Based on previous research on depression, patients were most likely cognitively impaired compared to the healthy population (Koenig et al., 2014;Morimoto and Alexopoulos, 2013). As the study did not include any control group, it is not possible to compare direct effects of depression on cognitive function, and we are only able to make assumptions based on the established literature. The study's strengths were the use of well-established and validated assessment scales, inclusion of several cognitive tasks, a representative sample of the clinical population, and robust statistical methods. Because few exclusion criteria were used and because of the observational and prospective design, the sample is representative of everyday clinical practice in psychiatric specialist health care for older adults in Norway.
Future directions
Although we did not find any association between anxiety symptoms and cognitive dysfunction in our sample of inpatients treated for LLD, it might be that comorbid anxiety influences cognition over a longer time period. It has been suggested that anxiety has a moderate effect over short time periods, which increases when followed up over a longer period (Petkus et al., 2017). Previous findings show that anxiety is tied to greater memory decline over 4 years (DeLuca et al., 2005) and is associated with a genetic risk for dementia (Petkus et al., 2017). Others, however, have found that neither anxiety disorders nor anxiety symptoms as measured by HADS-A were associated with increased risk for developing dementia (de Bruijn et al., 2014). Future studies should therefore investigate anxiety symptoms in depressed patients over a longer time period after treatment. Moreover, our results confirm the findings of Bendixen and Engedal (2016), where anxiety symptoms seem to be common among patients with LLD. As these symptoms appear to be persistent (Bendixen et al., 2019), future work should investigate whether specifically treating anxiety symptoms in depressed patients leads to better treatment outcomes.
Conclusion
There was no additive effect of comorbid anxiety symptoms on cognitive dysfunction in late-life depression in our sample of inpatients.
Author contribution statement
Liva Jenny Martinussen, Ina Selseth Almdahl, Maria Stylianou Korsnes: Analyzed and interpreted the data; Wrote the paper. Jūratė Šaltytė Benth: Analyzed and interpreted the data; Contributed reagents, materials, analysis tools or data; Wrote the paper. Tom Borza, Geir Selbaek: Conceived and designed the experiments; Performed the experiments; Wrote the paper.
Bodil Mcpherson: Performed the experiments; Wrote the paper.
Funding statement
The study reported in this article was supported by the Old Age Psychiatry Research Group, Oslo University Hospital, Norway. The original study was supported by unrestricted grants from the South-Eastern Norway Regional Health Authority (grant number: 2010088) and Innlandet Hospital Trust, Norway (grant number: 150201). These institutions had no further role in study design; in the collection, analysis, and interpretation of data; in the writing of the report; or in the decision to submit the paper for publication.
|
v3-fos-license
|
2020-12-03T09:06:22.823Z
|
2020-12-02T00:00:00.000
|
229425896
|
{
"extfieldsofstudy": [
"Geology"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://tc.copernicus.org/articles/14/4341/2020/tc-14-4341-2020.pdf",
"pdf_hash": "8616ebe288bfe4439531e1fbf3e7726755fd6dd9",
"pdf_src": "Anansi",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46484",
"s2fieldsofstudy": [
"Environmental Science",
"Geology"
],
"sha1": "6c28cbdc548e738c58df7f0a046745ab1261743d",
"year": 2020
}
|
pes2o/s2orc
|
Ground ice, organic carbon and soluble cations in tundra permafrost soils and sediments near a Laurentide ice divide in the Slave Geological Province, Northwest Territories, Canada
The central Slave Geological Province is situated 450–650 km from the presumed spreading centre of the Keewatin Dome of the Laurentide Ice Sheet, and it differs from the western Canadian Arctic, where recent thaw-induced landscape changes in Laurentide ice-marginal environments are already abundant. Although much of the terrain in the central Slave Geological Province is mapped as predominantly bedrock and ice-poor, glacial deposits of varying thickness occupy significant portions of the landscape in some areas, creating a mosaic of permafrost conditions. Limited evidence of ice-rich ground, a key determinant of thaw-induced landscape change, exists. Carbon and soluble cation contents in permafrost are largely unknown in the area. Twenty-four boreholes with depths up to 10 m were drilled in tundra north of Lac de Gras to address these regional gaps in knowledge and to better inform projections and generalizations at a coarser scale. Excess-ice contents of 20 %–60 %, likely remnant Laurentide basal ice, are found in upland till, suggesting that thaw subsidence of metres to more than 10 m is possible if permafrost were to thaw completely. Beneath organic terrain and in fluvially reworked sediment, aggradational ice is found. The variability in abundance of ground ice poses long-term challenges for engineering, and it makes the area susceptible to thaw-induced landscape change and mobilization of sediment, solutes and carbon several metres deep. The nature and spatial patterns of landscape changes, however, are expected to differ from ice-marginal landscapes of western Arctic Canada, for example, based on greater spatial and stratigraphic heterogeneity. Mean organic-carbon densities in the top 3 m of soil profiles near Lac de Gras are about half of those reported in circumpolar statistics; deeper deposits have densities ranging from 1.3–10.1 kg C m−3, representing a significant additional carbon pool. The concentration of total soluble cations in mineral soils is lower than at previously studied locations in the western Canadian Arctic. This study can inform permafrost investigations in other parts of the Slave Geological Province, and its data can support scenario simulations of future trajectories of permafrost thaw. Preserved Laurentide basal ice can support new ways of studying processes and phenomena at the base of an ice sheet.
Introduction
A unique drilling program sampling permafrost in the tundra north of Lac de Gras resulted in 24 boreholes with depths up to 10 m. It sampled the active layer and permafrost of soils and sediments and allowed us to investigate their contents of ground ice, organic carbon and soluble cations. These three interrelated topics (e.g., Littlefair et al., 2017; Lacelle et al., 2019) are relevant for understanding the nature of permafrost and for anticipating the consequences of its thaw, which are expected to become increasingly persistent and widespread due to anthropogenic global climate change.
The Lac de Gras region (Fig. 1), as part of the Slave Geological Province, is of interest because its geomorphic setting and Quaternary history differ from more intensively studied areas in the previously glaciated western Canadian Arctic and in unglaciated terrain in Yukon and Alaska (Dredge et al., 1999; Karunaratne, 2011). Its Holocene periglacial evolution spans only about 9000 years; it is situated relatively close to an ice divide of the Laurentide Ice Sheet. Even though ice divides shift over time (Margold et al., 2018; Boulton and Clark, 1990b), predominant zones of erosion and deposition by the Laurentide Ice Sheet, and likely previous ice sheets, are apparent at the continental scale and have been linked with continental patterns of ice flow and basal thermal regime (Sugden, 1977, 1978; Boulton and Clark, 1990a). The area beneath the Keewatin Dome spreading centre (Fig. 1c, zone 1), the predominant location of its ice divides, was characterized by low subglacial erosion rates and often has thick till. Areas near the margin of the ice sheet are frequently characterized by high deposition rates, and many of these environments with thick till are relatively accessible and well studied in the western Canadian Arctic (Fig. 1c, zone 3). Between these (Fig. 1c, zone 2), there is evidence for an area of increasing glacial erosion and basal conditions transitioning from melting to refreezing to fully frozen. The Slave Geological Province largely falls into this intermediate zone that is characterized by predominantly thin glacial sediments, and mineral soils are often coarse and locally sourced from igneous and metamorphic rocks. The conditions in this zone likely result in high spatial and stratigraphic heterogeneity in the landscape, creating the need for detailed study of permafrost conditions and careful scaling approaches for coarse-scale models.
Several mines in the area as well as the planned Slave Geological Province Corridor (all-season road, power transmission, communication) add applied relevance in the long term. This billion-dollar infrastructure project will connect Yellowknife with mines and future mineral resources in the study area and may eventually connect Canada's highway system to a deep-water port on the Arctic Ocean in Nunavut. The study presented here has been enabled by the Slave Province Surficial Materials and Permafrost Study, a large partnership of industry, government and academia.
The ice content of permafrost strongly determines the consequences of thaw such as subsidence or thermokarst development. It thereby also controls potential damage to infrastructure as well as the amount and timing of carbon fluxes into the atmosphere (Turetsky et al., 2019) and nutrient release into terrestrial and aquatic ecosystems (Kokelj et al., 2013). The surroundings of Lac de Gras are shown as continuous permafrost with low (0 %–10 %) visible-ice content in the upper 10–20 m and sparse ice wedges in the Permafrost Map of Canada (Heginbottom et al., 1995) and are designated as having thin overburden cover (< 5–10 m) and exposed bedrock in the Circum-Arctic Map of Permafrost and Ground-Ice Conditions (Brown et al., 1997). For both, it is the lowest class of ground-ice content in continuous permafrost. The new ground ice maps for Canada (O'Neill et al., 2019) show the study area (50 km × 50 km) to contain no or negligible wedge ice, negligible to low segregated ice and no relict ice, which includes buried glacier ice. By contrast, the hummocky tills that cover about 20 % of the study area have been hypothesized to contain large ice bodies, possibly of glacigenic origin (Dredge et al., 1999), as proposed also in other areas (e.g., Dyke and Savelle, 2000). Improving our understanding of the vertical distribution, spatial heterogeneity and characteristics of ground ice is a key prerequisite for better simulating and anticipating the consequences of permafrost thaw.
Large stocks of organic carbon that can be decomposed and transferred to the atmosphere upon thaw (Schuur et al., 2008) are held in permafrost (Hugelius et al., 2014). The integration of organic carbon into the near-surface permafrost is related to either periods of deeper thaw, which can redistribute carbon within the soil profile, or to a rising permafrost table due to colluviation or alluviation, peat accumulation, or climate cooling. These processes affect both carbon and geochemical profiles (Lacelle et al., 2019). To support the generation of future climate scenarios, the quantification and characterization of permafrost organic-carbon storage are important, and little information on soil organic carbon exists within a large area surrounding Lac de Gras, especially at depths exceeding 1 m (Hugelius et al., 2014;Tarnocai et al., 2009).
Nutrients, organic materials and contaminants (natural and anthropogenic) can be released from permafrost during thaw (Dyke, 2001;Leibman and Streletskaya, 1997;Mackay, 1995), translating geomorphic disturbance, forest or tundra fire, or atmospheric warming into impacts on the chemistry of soils and surface water, as well as provoking noticeable ecological and downstream effects (e.g., Frey and McClelland, 2009;Kokelj and Burn, 2005;Kokelj et al., 2009;Littlefair et al., 2017;Malone et al., 2013;Tank et al., 2016). Studies from northwestern Canada report permafrost, the transient layer and the active layer to have distinct physical and geochemical characteristics (Kokelj et al., 2002;Burn, 2003, 2005;Lacelle et al., 2014) and sometimes distinguish relict/paleo-active layers (Burn, 1997;Lacelle et al., 2019). These vertical patterns are attributed to (a) past thawing causing loss of ground ice, leaching of solutes from thawed soils, and redistribution of organic carbon by cryoturbation (Kokelj and Lewkowicz, 1999;Kokelj et al., 2002;Leibman and Streletskaya, 1997;Pewe and Sellmann, 1973) and (b) thermally induced moisture migration during soil freezing redistributing water and soluble ions (Cary and Mayland, 1972;Qiu et al., 1988) contributing to solute enrichment in near-surface permafrost (Kokelj and Burn, 2005). These study areas, however, are different from the Slave Geological Province. For example, the alluvial materials derived from sedimentary and carbonate rock of the Taiga Plains together with regular flooding produce solute-rich active layer and permafrost deposits in the Mackenzie Delta. As another example, the sediments that comprise Herschel Island are silty-clay tills that include coastal and marine deposits excavated by the Laurentide Ice Sheet (Burn, 2017). In contrast to previous findings in these areas, we hypothesize that the tills in the Lac de Gras region are solute-poor because they are locally sourced from granitic bedrock (Hu et al., 2003) and had limited potential for chemical weathering at depth. This is in line with sediments of similar origins, but contrasting depositional and permafrost history near Yellowknife reported to have low soil solute concentrations (Gaanderse et al., 2018) with variable trends in vertical profiles suggestive of active-layer leaching and signs of evaporative concentration depending on site history (Paul et al., 2020).
This study aims to improve the understanding and quantitative characterization of permafrost- and active-layer materials in tundra environments near Lac de Gras and contribute to better understanding permafrost environments in the intermediate zone between the margins and the Keewatin Dome of the Laurentide Ice Sheet in the Slave Geological Province more broadly. The objectives are to (i) quantify the amounts and vertical patterns of excess ice, organic carbon and total soluble cations; (ii) explore factors contributing to the variation in physical and chemical characteristics between terrain types; and (iii) compare excess-ice content, organic-carbon density and total soluble cation concentration with other permafrost environments or with compilations such as overview maps and databases. We interpret multiple boreholes grouped by terrain type and, with the data available, distinguish unfrozen (active layer) and frozen (predominantly permafrost) samples but do not additionally separate transient or relict active layers.
Study region
The study region (110.3° W, 64.7° N) is north of Lac de Gras, approximately 200 km south of the Arctic Circle and about 310 km northeast of Yellowknife. The regional climate is continental, with cool, short summers and cold, extremely long winters (Hu et al., 2003). Ekati, a diamond mine in the study region, has a mean annual and summer air temperature of −8.9 and 14.0 °C, respectively, and an annual precipitation sum of 275 mm during 1988–2008 (Environment Canada, 2019). Deglaciation occurred before 8500 BP; between 6000 and 3000 BP, forest tundra extended approximately to the study area and then retreated again (Dyke, 2005; Dredge et al., 1999).
The region is in the zone of continuous permafrost ( Fig. 1) and mapped as having low (0 %-10 %) visible-ice content in the upper 10-20 m (Heginbottom et al., 1995;Brown et al., 1997). One map indicates sparse ice wedges and the other thin overburden (< 5-10 m) with exposed bedrock. A recent circumpolar compilation of permafrost carbon data (Hugelius et al., 2014) estimated soil organic-carbon storage (SOCs) to be 5-15 (0-1 m) and 15-30 kg C m −2 (0-3 m). Recent work in the area has produced a wealth of permafrost stratigraphic (Gruber et al., 2018a) and thermal (Gruber et al., 2018b) data that enabled not only this contribution but also several simulation studies (Cao et al., 2019a;Melton et al., 2019;Cao et al., 2019b) predicting permafrost temperature driven by global atmospheric models.
For spatial context, we consider a 50 km × 50 km study area and, additionally, its surroundings as characterized by the 1 : 125 000 National Topographic System (NTS) of Canada maps on surficial geology "Lac de Gras" (76-D, Geological Survey of Canada, 2014b) and "Aylmer Lake" (76-C, Geological Survey of Canada, 2014a). These map areas are located 450-650 km from the presumed mean spreading centre of the Keewatin Dome and about 100-300 km from the transition of thick to thin glacial sediments that is apparent on coarse-scale maps. It generally is a source area for sediments, unlike ice marginal locations. The spatial abundance of surface materials and previously predicted relict ice content is summarized in Table 1 (see also Fig. S1 in the Supplement).
The study area is characterized by low relief where irregular bedrock knobs and cuestas form hills up to 50 m high (Dredge et al., 1999). The northern part is dominated by till deposits, whereas the southern half consists more prominently of bedrock with patches of till (Hu et al., 2003). Numerous eskers and outwash complexes, mostly composed of sand and gravel, are found in the area (Dredge et al., 1994). Till deposits are differentiated by their estimated thickness into till veneer (< 2 m thick), till blanket (2-10 m thick) and hummocky till (5-30 m thick). These deposits typically have a silty sand to sand matrix with low percentages of clay and 5 %-40 % gravel (Wilkinson et al., 2001). Overburden thickness is considerable in hummocky till and till blanket of the study area (Haiblen et al., 2018) and the two surrounding map sheets (Kerr and Knight, 2007).
Soil parent materials consist of till, glacio-fluvial sediments or peat. Upland till surfaces are characterized by mud boils, earth hummocks and organic material, observed to depths of up to 80 cm, that has been redistributed within the active layer by cryoturbation (Dredge et al., 1994). The tills derived from granitic and gneissic terrain have a silty or sandy matrix, whereas those derived from metasedimentary rocks contain a higher silt-clay content (Dredge et al., 1999). Low-lying areas are mostly comprised of colluvium or alluvium rich in organics, and wet areas often have polygonal peatlands (Karunaratne, 2011).
The area is in continuous shrub tundra (Wiken et al., 1996), and common shrubs include northern Labrador tea (Rhododendron tomentosum) and dwarf birch (Betula glandulosa), while bog cranberry (Vaccinium vitis-idaea) and dwarf bog rosemary (Andromeda polifolia) often comprise the understory (Karunaratne, 2011). Well-drained upland areas are typically covered with a thin layer of lichens and mosses (Hu et al., 2003) (Fig. 2a). Grasses and sedges with a ground cover of moss comprise the vegetation cover in valleys (Fig. 2b), and some poorly drained low-lying areas have thick peat associated with ice-wedge polygons and sedge meadows (Hu et al., 2003; Karunaratne, 2011) (Fig. 2c). Frequently, low-lying areas have tall shrubs along small streams and at the rise of steeper slopes. Esker tops have little vegetation and are often comprised of exposed soil (Fig. 2d).

Figure 1 (caption, partial): Location of the study area with respect to treeline, the mapped transition between continuous and discontinuous permafrost, and the Slave Geological Province. (c) Three zones of differing thickness of glacial sediment are apparent: zone (1) is assumed to correspond to the location of the Keewatin Dome spreading centre; zone (2) is an assumed area across which glacial erosion increases and basal conditions transition from melting to refreezing to fully frozen; and zone (3) is near the margin of the ice sheet, frequently characterized by high deposition rates and thick till. Data: Surficial Geology Map of Canada (Geological Survey of Canada, 2014c), Geological Map of Canada (Wheeler et al., 1996) and CanVec Hydro Features; Northern Canada Geodatabase (1.0); Circum-Arctic Map of Permafrost and Ground-Ice Conditions (Brown et al., 1997). Yellowknife is the closest city and is indicated with a yellow star.
Field observation and sampling
In summer 2015, soil cores with a nominal diameter of 5 cm, but often irregular due to partial melting and reaming, were obtained using a diamond drill (Kryotek Compact Diamond Sampler), sectioned into 20 cm intervals and logged (soil texture, colour, ice content and visible organic matter) in the field while still frozen (Subedi, 2016;Gruber et al., 2018a). Esker locations were augered. Two soil pits were excavated within approximately 10 m of each borehole to describe and sample typical near-surface soil conditions. Where possible, samples were taken at depths of 10, 20 and 30 cm; exact sample volumes are not known. The depth of thaw at the time of sampling was estimated by probing, although this was often unsuccessful in coarse mineral soil. Drill core and pit samples were double-bagged for thawed shipment to the laboratory in Yellowknife.
Borehole locations were originally planned for investigating (a) vertical and spatial patterns of solute and organic-carbon content in soils and (b) terrain effects on ground temperature based on thermistor chains installed after drilling. For site selection, we used the surface classes of the 1 : 125 000 surficial geology map 76-D, topographic position, aerial imagery revealing surface cover such as vegetation or boulders, and a Landsat-derived index outlining the location of late-lying snow drifts. Locations were planned in clusters to simplify the logistics of moving equipment with a helicopter. The reverse-circulation winter drilling campaign during March and April 2015 (Normandeau et al., 2016) in-

Table 1 (caption): Spatial abundance of lakes, surficial geology and estimated relict ground ice for the 50 km × 50 km study area and surroundings (Fig. 1). Surficial geology is based on the 1 : 125 000 map sheets 76-D and 76-C; percentages are relative to exposed land area, whereas values in square brackets are relative to total surface area including lakes. The abundance of relict ground ice is from O'Neill et al. (2019), who use a model based on data products at the scale of 1 : 5 000 000.

During fieldwork, blocky surfaces could not be sampled as moving clasts jammed the drill rods. Sections with high gravel content or boulders resulted in slow progress and were often terminated at relatively shallow depth due to time constraints. Furthermore, these sections often resulted in low recovery, because the heating of the drill when cutting through hard rock would melt the frozen core and fines would be washed out. The complete drill logs and photographs revealed a cluster of boreholes with well-graded fine sands and pronounced ice lenses as well as till with high excess-ice content beneath upland locations. These clusters, however, do not correspond well with the surficial mapping units used. Correspondingly, four terrain types (Fig. 2) comprised of upland till, fluvially reworked till (the valley), organic terrain and eskers are used as a grouping for describing and interpreting the drill cores and soil pits at 24 locations (Table A1; see also Fig. S1 in the Supplement). The uneven depth and sparse sampling within boreholes led us to report results from multiple boreholes in combined plots.
Upland till. Ten boreholes were sampled to depths of 2.5-9.5 m in smoothly rounded hills comprised of thick till and in till veneer over bedrock. The dominant cryostructure was wavy and suspended. The dominant plant species were dwarf shrubs, Labrador tea and grasses. Thaw depths were about 2 m on hill tops and nearly a metre at the bottom of hills.

The valley. Eight boreholes were drilled to depths of 1-6 m in a gently sloped valley that contrasts with other terrain types because its silts and sands are well sorted and likely derived from fluvial reworking of local tills. Boreholes located on the more elevated sides of the valley typically had coarser sediments, whereas those near its axis had mostly fine sediments with high silt contents and organics with ice-wedge polygons. The dominant cryostructure was lenticular. A few waterlogged sites contained tall shrubs with water channels. Sites were sparsely to moderately covered with plant species such as dwarf birch, Labrador tea and grasses. Thaw depths were 35-40 cm.
Organic terrain. Two boreholes to depths of about 4.5 m were drilled on the centres of ice-wedge polygons in peatlands with hummocky surface topography. The dominant plant species were dwarf birch and Labrador tea, with plenty of low-lying grasses. The depth of thaw was 35 cm and the permafrost table 50-70 cm deep.
Eskers. Four boreholes were drilled to depths of 1.5-12 m at hilltop locations with sparse vegetation or exposed soil.
Methods
All samples were thawed and processed at ambient temperature. Samples were homogenized, poured into beakers, weighed and allowed to settle for 12 h (see Kokelj and Burn, 2003). Volumes of sediment V_s and supernatant water V_w were recorded to estimate the volumetric excess-ice content (%) of the permafrost samples as V_ei = 100 × 1.09 V_w / (1.09 V_w + V_s), where 1.09 approximates the density of water divided by that of ice. The volumetric percentages of coarse fragments (> 5 mm), sand (0.074-5 mm) and fines (< 0.074 mm) in the sediment were estimated visually to the nearest 5 %. The volumetric percentage of coarse fragments (V_c) relative to the total sample volume was obtained by multiplying the estimated coarse percentage by (1 − V_ei/100). The length of solid rock cored per borehole was estimated from the core photographs. Supernatant water was extracted directly from samples where sufficient volume was available, and to all other samples a known amount of deionized water was added (1 : 1 extraction ratio; Janzen, 1993). These samples were mixed thoroughly and then allowed to settle for 12 h. Water was collected with a syringe and filtered through 0.45 µm cellulose filter paper. The remainder of the sample was dried for 24 h at 105 °C to determine the gravimetric water content (%), expressed on a dry basis (GWC_d) and on a wet basis (GWC_w) (see Phillips et al., 2015).
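For concreteness, the two volume conversions above can be written as a short Python sketch; the function names and the example volumes are introduced here purely for illustration and are not part of the original analysis.

def excess_ice_content(v_water_ml, v_sediment_ml):
    # Volumetric excess-ice content (%) of a thawed sample: the factor 1.09
    # converts the supernatant water volume to an equivalent ice volume
    # (density of water divided by density of ice).
    v_ice = 1.09 * v_water_ml
    return 100.0 * v_ice / (v_ice + v_sediment_ml)

def coarse_fragment_content(coarse_pct_of_sediment, v_ei_pct):
    # Coarse-fragment content (%) relative to the total sample volume,
    # rescaling the visually estimated percentage of the sediment fraction.
    return coarse_pct_of_sediment * (1.0 - v_ei_pct / 100.0)

v_ei = excess_ice_content(v_water_ml=40.0, v_sediment_ml=60.0)              # about 42 %
v_c = coarse_fragment_content(coarse_pct_of_sediment=20.0, v_ei_pct=v_ei)   # about 12 %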
The concentration (mg/L) of the soluble cations Ca2+, Mg2+, Na+ and K+ was determined by atomic absorption spectrophotometry at the Taiga Environmental Laboratory (Taiga Lab) in Yellowknife. Measured soluble ion concentrations C_m (mg/L) were converted to an expression E in milli-equivalents per unit mass of soil (meq/100 g of dry soil) as E = C_m × M_w,100g / (1000 M_e), where M_e is the equivalent mass of the ion (g) and M_w,100g is the mass of water per 100 g of dry soil as present in the sample at the time of water extraction. Presentation of soluble cation concentrations per unit mass of dry soil facilitates comparison between samples of varying moisture contents. We sum the resulting four soluble cation concentrations to obtain the total soluble cation concentration.
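A minimal sketch of this unit conversion, assuming equivalent masses of 20.04, 12.15, 22.99 and 39.10 g for Ca2+, Mg2+, Na+ and K+, respectively; the measured concentrations and the water mass in the example are hypothetical.

# Equivalent masses in g per equivalent (atomic mass divided by ionic charge).
EQUIVALENT_MASS_G = {"Ca": 20.04, "Mg": 12.15, "Na": 22.99, "K": 39.10}

def meq_per_100g_dry_soil(conc_mg_per_l, ion, water_g_per_100g_soil):
    # Convert a measured concentration (mg/L) into meq per 100 g of dry soil
    # using the mass of water present at the time of extraction.
    water_volume_l = water_g_per_100g_soil / 1000.0   # 1 g of water is about 1 mL
    ion_mass_mg = conc_mg_per_l * water_volume_l      # mg of ion per 100 g dry soil
    return ion_mass_mg / EQUIVALENT_MASS_G[ion]       # mg divided by mg/meq gives meq

measured = {"Ca": 12.0, "Mg": 3.1, "Na": 4.5, "K": 1.8}   # hypothetical mg/L values
total_cations = sum(meq_per_100g_dry_soil(c, ion, water_g_per_100g_soil=35.0)
                    for ion, c in measured.items())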
Organic-matter content or loss on ignition (LOI, %) is expressed on a gravimetric dry basis and was determined using the sequential loss-on-ignition method (Sheldrick, 1984) at Carleton University. A small amount (2-3 g) of the homogenized and oven-dried sample (< 0.5 mm soil fraction) was placed in a crucible and heated to 550 °C for 6 h to determine the organic-matter content as LOI = 100 × (M_S,105 − M_S,550) / M_S,105, where M_S,105 is the mass of sediment after oven drying at 105 °C and M_S,550 is the mass of sediment after ignition at 550 °C. Because homogenization involved crushing the oven-dried sample with mortar and pestle, any larger organic fractions like roots and plant remains are therefore part of the sample analyzed for LOI. To avoid combustion problems, reduced amounts (0.5-1 g) were processed when samples consisted of plant residue with very little visible mineral soil. When no mineral-soil component was visible after coarse components were removed, samples were not processed and an LOI of 80 % was estimated. This occurred only in the top 0.3 m and almost exclusively in samples from soil pits. The gravimetric percentage (P_0.5) of the < 0.5 mm soil fraction has been lost from the original analysis. Later, this was determined again based on dry sieving for 183 of 357 samples.
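The loss-on-ignition calculation itself is a one-line conversion; the example masses below are hypothetical.

def loss_on_ignition(mass_105c_g, mass_550c_g):
    # Organic-matter content (LOI, %) on a gravimetric dry basis.
    return 100.0 * (mass_105c_g - mass_550c_g) / mass_105c_g

loss_on_ignition(2.50, 2.41)   # 3.6 % LOI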
Data quality was assessed in a second analysis on the samples using the same procedures and tools as during the original processing. Based on measured blanks, the accuracy is about 0.03 % LOI; the median accuracy based on doubles is 0.04 % LOI with the highest difference being 0.30 % LOI.
Soil organic-carbon storage (SOCs, kg C m−2) was computed for comparison with soil carbon inventories (e.g., Hugelius et al., 2014). For this, soil organic-carbon concentration (SOCc, % mass) was computed from LOI following Dean (1974), and dry bulk density (DBD, kg m−3), which is known to correlate with SOCc (e.g., Alexander, 1989; Bockheim et al., 2003), was approximated from SOCc following Hossain et al. (2015), who conducted their study in geologic settings similar to the project area. This resulted in an estimated DBD for the fine-grained soil, i.e., excluding the volumes V_ei and V_c. To account for this, soil organic-carbon density (SOCd, kg C m−3) was derived by rescaling the product of SOCc and DBD to the total sample volume, and finally applied as average values over depth intervals within each terrain type to obtain SOCs. Aggregated SOCd and SOCs were reduced by the average proportion of solid rock cored through in upland till and in organic terrain below 2 m. For the samples without measured values, P_0.5 was estimated by beta regression (Cribari-Neto and Zeileis, 2010) with LOI, GWC_w and the visually estimated proportions of sand and fines as independent variables. Predictors are significant at the 0.1 % level, and residuals have a standard deviation of 11 %. Estimating P_0.5 for 176 of 357 samples statistically, parameterizing DBD and estimating V_c visually introduce uncertainty in the resulting values for SOCd and SOCs. The potential magnitude of this effect on average values is estimated by computing a low-carbon scenario and a high-carbon scenario, varying V_c and P_0.5 by ±10 percentage points each and DBD by ±50 kg m−3. These deviations are subjective choices and correspond to a bias of twice the recorded precision for V_c and approximately the standard deviation of model residuals for P_0.5. The deviation for DBD is chosen to be considerable while remaining within the variation observed by Hossain et al. (2015, Fig. 6). The averages of the resulting scenario values are 37 % lower and 38 % higher than the best estimate for SOCd that is reported and interpreted in the following sections.
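A minimal sketch of the scenario bookkeeping described above. It assumes that SOCd scales with P_0.5 and with the volume fraction not occupied by excess ice and coarse fragments; the conversion from LOI to SOCc and the DBD value are placeholders, not the published relations of Dean (1974) or Hossain et al. (2015), and all numeric inputs are hypothetical.

def socd_kg_per_m3(loi_pct, p05_frac, v_ei_pct, v_c_pct, dbd_kg_m3, socc_from_loi):
    # Soil organic-carbon density per unit total sample volume: the carbon
    # concentration of the fine fraction times its dry bulk density, rescaled
    # by the volume not occupied by excess ice and coarse fragments.
    socc_frac = socc_from_loi(loi_pct) / 100.0 * p05_frac
    fine_volume_frac = 1.0 - (v_ei_pct + v_c_pct) / 100.0
    return dbd_kg_m3 * socc_frac * fine_volume_frac

def socc_from_loi(loi_pct):
    # Placeholder conversion from LOI to organic-carbon concentration (% mass);
    # swap in the actual functional form where available.
    return 0.5 * loi_pct

# Low- and high-carbon scenarios: V_c and P_0.5 shifted by 10 percentage points
# and DBD by 50 kg m-3 in the direction that lowers or raises carbon.
best = socd_kg_per_m3(4.0, 0.70, 30.0, 15.0, 1500.0, socc_from_loi)
low = socd_kg_per_m3(4.0, 0.60, 30.0, 25.0, 1450.0, socc_from_loi)    # low-carbon scenario
high = socd_kg_per_m3(4.0, 0.80, 30.0, 5.0, 1550.0, socc_from_loi)    # high-carbon scenario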
Detailed grain-size distribution was measured on selected samples using a Beckman Coulter LS 13320 laser-diffraction particle-size analyzer. Samples were first oven-dried at 105 °C and then crushed and homogenized with a mortar and pestle. Samples were then passed through a 2 mm sieve to remove the coarse fraction that was then weighed. Organic matter was removed from the fines using hydrogen peroxide. The samples were then mixed with Calgon to prevent flocculation and passed through the particle-size analyzer. Results were classified according to the USDA textural classification system (2 mm > sand > 53 µm > silt > 2 µm > clay).
Soil texture
Approximately 7 % of upland till (3.4 of 49.4 m of borehole depth, Table A1) consist of rock clasts larger than the core barrel. Eskers were augered rather than cored, and large clasts would have terminated the borehole and not be recovered. The majority of the valley cores were free of large clasts. Most soils consisted of poorly to very poorly sorted silt and sand. The relative proportion of silt was high in samples from mineral soils beneath organic terrain and in valley bottom sites, with average values exceeding 40 %. Clay content was low and always below 20 %.

Figure caption (partial): … Table A1 and in the Supplement. Green points show pit samples. Blue lines represent averaged values, taking into account sample depth intervals, with shaded blue areas indicating the standard error at 95 % confidence. Green shaded areas indicate the range of thaw depths for the boreholes at the time of sampling.
Ground ice
Field-logged visible-ice content is available for 113 core sections. The average, weighted by the length of core sections, is 24 %. Cryostructure is discussed in Sect. 6. Details about individual boreholes are in the Supplement, and all core photographs are available in Gruber et al. (2018a). Laboratory analyses show that water and excess-ice contents increase progressively with depth in upland till. Zones of high moisture content (Figs. 3a and 4a) were often associated with ice lenses, several centimetres thick (e.g., Figs. B1 and B3). Excess-ice content greater than 50 % in upland till became increasingly common below 4 m depth; five boreholes terminated in ice-rich material and five in rock. In organic terrain, high moisture content (> 80 %) but low excess-ice content in permafrost reflect saturated organic soils with low bulk density (Fig. 3b). The sharp decline in water content below 2 m depth coincides with a decline in organic matter contents (Fig. 4b). A notable increase in moisture and excess-ice content from 2.5-4 m depth occurred in underlying mineral soils. Profiles from the valley showed high moisture content near the surface, where organics were present, and deeper down (Fig. 3c) due to 20 %-50 % excess ice in mineral soil (Fig. 4c). Frozen and ice-poor till has been recovered near the bottom of boreholes in organic terrain and in the valley (Fig. B4). In eskers, water content was mostly below 20 %, and pore ice was the dominant ground-ice type (Fig. 3d).
Organic carbon
Organic-carbon density in the active layer was typically greater than at depth in permafrost (Fig. 5). Statistics of soil organic-carbon density and storage are given for consistent depth intervals and the four terrain types in Table 2.
Total soluble cations
Figure 7 (caption): Heating of the drill barrel, likely due to cutting through the rock near the middle of this section, caused complete melt of ground ice in the left part of the core shown. Here, gravel was recovered but fines were likely washed out with meltwater and drilling fluids. The right side of the core is partially thawed and ice-rich. This recovered section of less than 0.5 m represents 1.3 m in borehole NGO-DD15-2033.

The concentrations of soluble cations (meq/100 g dry soil) in organic-rich, shallow soils were mostly higher and more variable than those in mineral permafrost soils at depth (Fig. 6). In organic materials, the concentration of soluble cations near the top of permafrost was relatively high (Figs. 5b and 6b). In upland till, soluble cation concentrations, as with ice content, increased gradually with depth (Figs. 4a and 6a). Differences between the active layer and permafrost, as well as between organic and mineral soils, are apparent from their median concentration of soluble cations; organic samples are distinguished using a threshold of 30 % LOI (see CSSC, 1998) and permafrost considered when logged as frozen. In organic samples the contrast (permafrost to active layer) was 2.02 to 0.34 meq/100 g dry soil and in mineral samples 0.26 to 0.09 meq/100 g dry soil. The four group medians are all significantly (p < 0.01) different from each other based on Kruskal-Wallis tests. Although the dry bulk density of organic soil is lower than that of mineral soil, these patterns persist even when expressed relative to wet soil mass (p < 0.01) or the volume of water contained in the thawed sample (p < 0.05).
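One way to run such pairwise comparisons with SciPy is sketched below; the grouping and the concentration values are hypothetical and stand in for the per-sample data.

from itertools import combinations
from scipy.stats import kruskal

# Total soluble cation concentrations (meq/100 g dry soil); values hypothetical.
groups = {
    "organic permafrost": [2.3, 1.8, 2.0, 1.6],
    "organic active layer": [0.45, 0.30, 0.52, 0.21],
    "mineral permafrost": [0.30, 0.22, 0.28, 0.19],
    "mineral active layer": [0.08, 0.10, 0.07, 0.11],
}

for (name_a, a), (name_b, b) in combinations(groups.items(), 2):
    result = kruskal(a, b)   # pairwise Kruskal-Wallis test (two groups)
    print(f"{name_a} vs {name_b}: H={result.statistic:.2f}, p={result.pvalue:.3f}")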
Interpretation and discussion
Ground ice
The results presented are subject to a number of biases compared to a perfectly randomized sampling within each terrain type and complete recovery of samples during drilling. Drilling induced errors in the recovery of ground ice where excessive heating of the drill barrel caused partial or complete thaw of the core (Fig. 7). Depending on the degree of thaw and core composition, this resulted in intervals erroneously shown with reduced or no excess-ice content (Fig. 4c). The results are, therefore, likely to be conservative (low-biased) estimates of excess-ice and gravimetric water content at the locations sampled. The difficulty of drilling through large clasts, on the other hand, may have caused bias towards sampling locations with higher contents of excess ice and of fines. When drilling organics within polygon networks, the drill rig was placed on polygon centres. As a consequence, wedge ice, which is known to be present in the area based on the surface expression of polygon networks, is systematically avoided in sampling and, therefore, largely excluded from the present quantitative data and interpretation.
While this study was not designed to elucidate the origin of ground ice, a number of observations merit discussion. In organic terrain, the excess ice recovered resembles pool ice (clear with small bubbles and embedded peat filaments, Fig. B2b) and wedge ice (foliated with bubbles and some sediment, Fig. B2c), as previously reported for polygonal ground in organics (Mackay, 2000; Morse and Burn, 2013). In the valley and in organic terrain, the increase in water and excess-ice content with depth (Figs. 3 and 4) is likely due to ice segregation in frost-susceptible and relatively well graded mineral soil (Figs. B1 and B2d) with frequent reticulate cryostructure. Both the valley site, comprised of materials that are likely to be fluvially reworked tills, and organic terrain represent aggradational environments in low-lying areas where water tends to accumulate and the terrain surface gained material either through the accumulation of peat or fluvial deposition. Near the surface, permafrost aggraded upwards into Holocene deposits in these settings and, as such, does not contain relict or preserved ice within the depth of thaw prior to and during aggradation. Similarly, permafrost has aggraded in the sediment of eskers, where less fine material together with well-drained convex topography explains the absence of aggradational ice. Eskers in the study area occasionally contain relict ice, interpreted as partially derived from glacial meltwater and deposited together with the esker sediments (Dredge et al., 1999; Hu et al., 2003), and show geomorphic evidence of melt-out (Prowse, 2017). Cores from upland hummocky till, analyzed to 9 m depth (Fig. B3), were often associated with high amounts of excess ice. Ice occurred in wavy layers and sometimes as clear ice several centimetres thick (similar to Fig. 3.8c in Gruber and Haeberli, 2009). In contrast to the valley site and organic terrain, no reticulate structure and no apparent separation of consolidated fines and clear ice were visible in upland till cores. Sediment was coarser and more poorly sorted in upland till than at the valley site or beneath organics. Particles appeared to be suspended in the ice matrix, giving it a visual appearance of much lower ice content than it actually has, as reported previously for basal glacier ice facies (Knight, 1997; Murton et al., 2005). Soluble cation contents are lower in mineral-soil active layers and near-surface permafrost than at depth, consistent with near-surface leaching and preservation of underlying materials in the frozen state since deglaciation. Finally, the convex topography of upland till sites makes them well drained and differentiates them from the valley and from organic sites. These differences in cryostructure, cation content and sedimentological properties point to ice of differing origin and deposit type. The presence of hummocky topography, thermokarst lakes and thaw-slump-like features (Fig. C1) suggests melt-out of ice-rich till, and the involuted or hummocky nature of some hilltop surfaces resembles other permafrost-preserved glaciated landscapes in northwestern Canada, which are known to host relict Pleistocene ground ice (Dyke and Savelle, 2000; Rampton, 1988; St-Onge and McMartin, 1999). By contrast, Rampton (2000) invoked subglacial hydrology rather than ground-ice melt in interpreting features such as the one shown in Fig. C1, calling them (inverted) plunge pools that were caused by scouring where pressurized turbulent water was forced to change direction. Both ground ice characteristics and geomorphic features suggest that a large proportion of the excess ice in this hummocky till is Laurentide basal ice preserved beneath ablation till.

Table 2 (caption): Mean observed soil organic-carbon density (SOCd, kg C m−3) per depth interval and soil organic-carbon storage (SOCs, kg C m−2) for the four terrain types investigated. SOCs is based on average SOCd accumulated from the surface down to the specified maximum depth. Ranges in parentheses indicate minimum and maximum values rounded to the nearest integer; the number of samples is indicated in square brackets. Values for upland till and at depths below 2 m in organics account for the abundance of rock clasts larger than the core barrel. The top 0.3 m is subject to high uncertainty arising from the uniform estimation of 80 % LOI for organic-only samples.
The excess-ice contents encountered in mineral soil were often 20 %–60 %. As a first-order estimate, this implies that complete thaw of permafrost can cause about 0.2-0.6 m of subsidence for each vertical metre. The boreholes in upland hummocky till show an increasing trend of excess-ice content with depth, based on our limited sampling to 9 m alone. Half the boreholes terminate in ice-rich ground and the other half in rock. Not having boreholes terminate in ice-poor till indicates that thicker sequences of ice-rich material than what has been recovered can be expected. Furthermore, an earlier winter drilling campaign (Normandeau et al., 2016) produced six boreholes in upland till. NGO-RD15-150, colocated with NGO-DD15-1014, logged 13 m of "ice" with bedrock at 16.7 m. The other boreholes (NGO-RD15-148, 155, 160) only returned minor ice content with depths between 5.5 and 7.9 m. While this provides additional context, the absence of logged ice needs to be interpreted with care given the combined difficulties of logging reverse-circulation recovery and winter conditions, and because logging ice content has not been a priority of that campaign. Surface lowering of several metres, with potential of up to more than 10 m, could thus be expected from areas of thick upland till if this permafrost were to thaw completely. This includes the potential for thermokarst processes to mobilize sediments, solutes and organic carbon at depth more quickly than expected in strictly conductive one-dimensional thaw. A number of geomorphic features reminiscent of retrogressive thaw slumps (Fig. C1) and the presence of kettle lakes (Prowse, 2017) in the area both exhibit local relief that suggests melt-out of massive ice several metres in thickness.
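The first-order subsidence estimate amounts to integrating excess-ice content over depth; a minimal sketch, using a hypothetical upland-till profile, is given below.

def potential_subsidence_m(layers):
    # First-order thaw subsidence: layer thickness times excess-ice fraction,
    # summed over the profile; consolidation and lateral redistribution ignored.
    return sum(thickness_m * v_ei_pct / 100.0 for thickness_m, v_ei_pct in layers)

# Hypothetical profile as (thickness in m, excess-ice content in %).
profile = [(2.0, 0.0), (2.0, 20.0), (3.0, 40.0), (3.0, 60.0)]
potential_subsidence_m(profile)   # 3.4 m of potential surface lowering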
Glaciological context
In the interior of the Laurentide Ice Sheet, no supraglacial sources of debris existed. If the ice found in upland till is indeed Laurentide basal ice, then its mineral content must derive from basal entrainment. Debris-rich basal ice can result from a variety of processes that occur at glacier beds with net freezing or net melting (Cuffey et al., 2000). Some of the processes of freezing involve the migration of liquid water akin to permafrost aggradation and ice lensing studied in periglacial environments (Christoffersen and Tulaczyk, 2003).
Using ice samples from the Greenland Ice Sheet, Herron et al. (1979) showed debris-laden ice in the lowermost 15 m beneath a divide, and Souchez et al. (1995) showed vertical mechanical mixing of clean ice and basal ice beneath the summit of the ice sheet. Furthermore, they pointed to the similarity of the anomalously high gas content (CO 2 and CH 4 ) to that in permafrost soils, invoking the mobilization of soils predating the ice sheet. The original basal ice prior to upward mixing has partially formed at the ground surface and is hypothesized to be a remnant of the growing stage or the original build-up of the ice sheet (Souchez et al., 1994). Assuming vertical mixing near the bed of the ice sheet, the lowermost metres of ice may thus be composed to varying degrees of ice derived from precipitation and metamorphism at the surface of the ice sheet and of materials formed from the freezing of liquid water in debris (see Gow et al., 1979). The hypothesized basal ice in the project area, however, is richer in sediment, contains coarser clasts and is thicker than that reported from ice cores beneath the summit of Greenland , especially when accounting for previously thawed material that overlies the remnant ice found today.
Most studies of basal ice from ice sheets originate either from ice coring at modern ice divides, from modern margins of Arctic ice caps that have preserved basal ice-sheet ice (Hooke, 1976), or from studies near the margins of former ice sheets, where the presence of buried basal ice in permafrost (Murton et al., 2004) is well established. The present study may provide the first evidence of basal ice in the zone a few hundred kilometres from ice divides (as conceptualized in Fig. 1c, zone 2), where rates of erosion increase and the thermal regime at the base varies (Sugden, 1977, 1978; Boulton, 1996). These conditions are described by Hooke et al. (2013), who reconcile glaciological theory with observations from mineral prospecting. They predict the formation of thick dispersal plumes in the transition zone from basal melting to basal refreezing to a fully frozen bed that is likely to have occurred in the study area. Glacial sediment plumes are thus likely composed of directly eroded bedrock incorporated into the basal ice by, and together with, refreezing basal meltwater. Additionally, regolith and organics predating the ice sheet may be incorporated. The model of Hooke et al. (2013) is useful in explaining why we find a spatial patchwork of preserved excess ice. First, the thermal conditions at the bed of the ice sheet likely had a patchwork character in this transition zone, making the distribution of sediment plumes uneven. Second, basal ice can only be preserved where it is overlain by sediment thicker than the maximum depth of thaw during the Holocene. As spatial patterns of plume thickness, mineral content and vertical structure are expected to vary, so will the likelihood of preserving ice until today.
Estimated spatial abundance of relict ice and implications for mapping and modelling at a coarse scale
Although our results suggest relict ice preserved in areas of thick upland till, they do not easily support quantitative conclusions on the spatial extent over which this occurs. Nevertheless, we can constrain the range of plausible extent based on a few assumptions. Let us express the areal proportion underlain by some amount of relict ice as P_ice = P_up × P_pre, with P_up being the proportion of upland in a particular class of till and P_pre being the proportion of area within the upland portion where preserved ice is found. Based on visual interpretation of maps and imagery, we assume P_up to be 30 %-70 % in hummocky till and 5 %-30 % in till blanket. We can attempt to estimate P_pre based on what we interpret as preserved ice at three drilling locations (Fig. S1, NGO-DD15-1014 and around NGO-DD15-2004 and NGO-DD15-2033) and not finding it during winter drilling at three other locations (…, 156, 160). Winter drilling with reverse circulation may produce false negatives, whereas one could argue that NGO-DD15-2004 and NGO-DD15-2033 are close and should not be counted separately. With these biases, the low number of locations sampled and the implications of varying sediment-plume characteristics in mind, let us assume P_pre to be 25 %-75 %. Correspondingly, P_ice is 7 %-53 % for hummocky till and 1 %-23 % for till blanket, considering till veneer and bedrock are negligible by comparison. This leads to estimated areal proportions of the land surface underlain by some relict ice of 2 %-17 % for the study area, 1 %-12 % for NTS sheet 76-D and 2 %-20 % for NTS sheet 76-C (Table 1). Even though the areal proportion underlain by relict ice is uncertain, the contrast with the recent map of O'Neill et al.
(2019) is striking (Table 1). It highlights the importance of spatial heterogeneity and of representing it at a coarse scale, where the dominant surface type alone may not be a good predictor of actual relict ice content, or ice content in general. This scale mismatch is a common challenge for permafrost models (Gruber, 2012) and more generally for models of non-linear processes (Giorgi and Avissar, 1997). The rules proposed by O'Neill et al. (2019) would likely predict relict ice in the project area similar to observations if used with the surficial geology at the 1 : 125 000 scale. By contrast, the study area has been generalized into the classes of till veneer and bedrock, exclusively, in the surficial geology map 1 : 5 000 000 that was used as input for the nationwide map of O'Neill et al. (2019). In earlier permafrost maps and models, uncertainty and variability at the sub-grid scale have been propagated into model results with high/low cases on assumed sub-grid processes and with the presentation of additional instructions for field interpretation (e.g., Boeckli et al., 2012;Gruber, 2012). Analogously, introducing a scaling relationship between the surficial geology at 1 : 125 000 and 1 : 5 000 000 would allow us to apply the rules of O'Neill et al. (2019) at the national scale while improving the representation of heterogeneity. For example, a fractional sub-grid proportion of hummocky till in till veneer at 1 : 5 000 000 could be assumed. Extending this to other classes and introducing best, high and low estimates would make the scaling issue a part of the modelling process and propagate its effects into the final results.
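The range calculation described above is small enough to reproduce directly; the bounds are those assumed in the text, and extending the result to whole-landscape proportions additionally requires the surficial-geology fractions of Table 1.

# Bounds on the areal proportion underlain by relict ice, P_ice = P_up * P_pre,
# expressed as fractions.
bounds = {
    "hummocky till": {"P_up": (0.30, 0.70), "P_pre": (0.25, 0.75)},
    "till blanket": {"P_up": (0.05, 0.30), "P_pre": (0.25, 0.75)},
}

for unit, b in bounds.items():
    p_ice_low = b["P_up"][0] * b["P_pre"][0]
    p_ice_high = b["P_up"][1] * b["P_pre"][1]
    print(f"{unit}: P_ice = {100 * p_ice_low:.1f} %-{100 * p_ice_high:.1f} %")
# hummocky till: 7.5 %-52.5 % (reported as 7 %-53 %)
# till blanket: 1.2 %-22.5 % (reported as 1 %-23 %)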
Organic carbon
Terrain variation in organic-carbon density and in organic-matter content in fine material occurs in association with surficial material and topographic setting. Organic terrain is characterized by peat deposits up to 2.5 m thick and associated with low-lying, poorly drained portions of the landscape. In other terrain types, organic materials may have become vertically redistributed in the top few metres of soil profiles by cryoturbation (Dredge et al., 1994; Kokelj et al., 2007; Haiblen et al., 2018) and by burial during permafrost aggradation due to colluviation/alluviation (e.g., Lacelle et al., 2019). The low organic-carbon density at depth likely indicates the absence of sediment reworking and the preservation of permafrost in upland till during the Holocene.
Mean organic-carbon density in the top 3 m of soil profiles near Lac de Gras is about half that of the circumpolar mean values reported in a recent compilation for similar soils (Table D1). For mineral soils, this is similar to the mean of about 12 kg C m−3 in the northern Canadian Arctic and, for organics, it is similar to the mean of about 30 kg C m−3 in the southern Canadian Arctic reported for the top 1 m by Hossain et al. (2015, Fig. 5d). The low organic-carbon density in the study area, especially at depth, is interpreted to derive from the short duration of Holocene carbon accumulation following at least partial evacuation of older soil carbon by the Keewatin sector of the Laurentide Ice Sheet. While deep carbon pools are important (Koven et al., 2015), corresponding data, as reported here, are rare (Hugelius et al., 2014; Tarnocai et al., 2009). Values reported here on organic-carbon density in the top 0.3 m are subject to high uncertainty arising from the uniform estimation of 80 % LOI for organic-only samples. This is because peat usually has higher (Treat et al., 2016) and non-peat organic material lower (Hossain et al., 2015) LOI.
Total soluble cations
In mineral soil, the lower concentration of total soluble cations in the active layer (median of 0.05 meq/100 g dry soil) compared with permafrost (median of 0.25 meq/100 g dry soil) is interpreted to be caused by leaching of ions from unfrozen soil and is similar to observations in other regions (Table D2). This large contrast between mineral-soil active layers and permafrost is likely robust even though variable extraction ratios were used. Fitting a model to predict total soluble cation concentration in mineral soil from GWC_w and the extraction ratio shows GWC_w as a highly significant predictor, while the extraction ratio is not significant (p > 0.05). Additionally, the redistribution of ions along thermal gradients during freezing may have caused solute enrichment during the development of segregated ice (Figs. B1 and B2) in aggrading permafrost (see Kokelj and Burn, 2005) in mineral soils of the valley and beneath peat in organic terrain. There, zones of increased cation concentrations at depth corresponded with ice-rich intervals in permafrost, especially at sites in till and in organic terrain (Figs. 4 and 6). Where high amounts of organic matter are present, the concentration of total soluble cations is also high. As a consequence, mineral-soil permafrost has lower concentrations of soluble cations than organic active-layer soils but higher concentrations than mineral active-layer soils.
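Such a check can be sketched with an ordinary least-squares fit; the data frame below uses hypothetical values, and the actual model specification used in the study (including any transformation of the response) is not reproduced here.

import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical mineral-soil samples: total soluble cations (meq/100 g dry soil),
# gravimetric water content on a wet basis (%) and the extraction ratio used.
df = pd.DataFrame({
    "total_cations": [0.08, 0.21, 0.30, 0.12, 0.26, 0.18, 0.10, 0.24],
    "gwc_w": [15.0, 35.0, 55.0, 20.0, 45.0, 30.0, 18.0, 40.0],
    "extraction_ratio": [1.0, 1.0, 0.5, 1.0, 0.5, 1.0, 1.0, 0.5],
})

model = smf.ols("total_cations ~ gwc_w + extraction_ratio", data=df).fit()
print(model.summary())   # inspect coefficients and p-values for both predictors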
The absolute concentrations of soluble cations obtained in the study area near Lac de Gras are low compared to previous studies from northwestern Canada that report higher concentrations in active layer and permafrost across diverse terrain types (Table D2). In the Mackenzie Delta, alluvial materials derived from sedimentary and carbonate rock of the Taiga Plains and regular flooding produce solute-rich active-layer and permafrost deposits. A range of forest-terrain types contained more soluble cations, often several times higher, in the active layer and permafrost than the mineral soils in this study. Also in comparison with undisturbed terrain on Herschel Island, the absolute concentrations of soluble cations in our study are low. Sediments on Herschel Island are silty-clay tills that include coastal and marine deposits excavated by the Laurentide Ice Sheet (Burn, 2017). These materials can be saline below the thaw unconformity, indicating permafrost preservation of soluble materials below the maximum depth of early Holocene thaw (Kokelj et al., 2002) or their concentration in colluviated materials (Lacelle et al., 2019). The low concentrations in our study area are associated with the contrasting nature and origins of surficial materials. Tills in our study region are generally coarser grained than many glacial deposits studied in the western Arctic, are regionally sourced from mostly granitic rocks and in many upland locations have been exposed to only minor postglacial landscape modification (Haiblen et al., 2018; Rampton and Sharpe, 2014). Although analytical methods are different, studies near Yellowknife (Gaanderse et al., 2018; Paul et al., 2020) also suggest low soluble cation concentrations in materials of similar origins but with contrasting depositional environments, terrain history and ecological conditions. Water from glaciolacustrine delta sediments west of Contwoyto Lake, about 120 km north of the study area, has been reported with an average concentration of 6.2 meq/L for eight samples from 3-12.4 m (Wolfe, 1998), similar to the average value of 7.5 meq/L resulting from 10 esker samples of our study in the same depth range.
The values reported must be interpreted in light of the ambiguity involved in the choice of method for extracting water from samples and in normalizing analytical results for comparison. Non-uniform extraction ratios can add uncertainty due to increased uptake of solutes from soil where water content is high (Toner et al., 2013). This effect is difficult to prevent with multiple samples having GWC w larger than 90 %, and standardization by extraction from a saturated paste as used in soil science is impractical with large clasts and high ice content. As such, the values obtained provide an imperfect comparison of the total cations that can be dissolved from the sediment within each sample. For comparison of samples, analytical results from permafrost studies are often normalized to dry soil mass (common in soil science) or water volume (common in studies of water chemistry or glaciology), either that contained in the sample or that used during extraction of solutes. Because no uniformly accepted protocol exists, comparison between studies is often challenged by differences in their methods.
Sections with relatively high solute concentrations exist in several boreholes. We hypothesize that those are, at least partially, caused by fresh rock flour produced when the diamond drill bit cuts through clasts, e.g., in NGO-DD15-2006 near 2.4 m depth. As such, the summarized concentrations we report may have a high bias.
Conclusions
The research area near Lac de Gras is characterized by a mosaic of terrain types with a high degree of fine-scale spatial variability in subsurface conditions. Permafrost there contains much more ground ice, slightly less organic carbon and fewer soluble cations compared with national and global compilation products or published research from sites in the western Canadian Arctic. This study provides quantitative data in a region with few previous studies and it supports six specific conclusions.
1. Excess-ice contents of 20 %-60 % are common, especially in samples from upland till and till-derived sediments, and the average field-logged visible-ice content is 24 %. This new regional insight improves upon coarse-scale compilations that map the area north of Lac de Gras as ice-poor (O'Neill et al., 2019;Brown et al., 1997;Heginbottom et al., 1995). Specifically, it points to the importance of scaling issues when applying models with coarse-scale input data.
2. Thick occurrences of excess ice found in upland till are likely remnant Laurentide basal ice, and aggradational ice is found beneath organic terrain and in fluvially reworked till.
3. Thaw-induced terrain subsidence on the order of metres to more than 10 m is possible in ice-rich till. Even though this study did not investigate the spatial abundance of ice-rich till, it can be estimated as 2 %-17 % for the study area. Organic terrain hosts wedge ice and is typically underlain by ice-rich mineral deposits. Future thermokarst processes may therefore result in significant landscape change and fast mobilization of sediment, solutes and carbon to a depth of several metres. Geomorphic evidence of past ground-ice melt, including thaw-induced mass wasting, exists.
4. Peatlands were found to be up to 2.5 m thick, and in till and till-derived deposits, cryoturbation and colluviation/alluviation have redistributed modest amounts of organic carbon locally to depths of 2-4 m. Mean organic-carbon density in the top 3 m of soil profiles near Lac de Gras is about half that reported in recent circumpolar statistics (Hugelius et al., 2014).
5. The concentration of total soluble cations, expressed as meq/100 g dry soil, in active-layer and permafrost mineral soils is markedly, often by 1 order of magnitude, lower in the Lac de Gras area than at other previously studied locations in the western Canadian Arctic. Mineral-soil active layers have a lower concentration of total soluble cations than permafrost. Total soluble cation concentrations are higher where soils are rich in organic matter.
6. Abundant relict ground ice and glacigenic sediments exist at locations in the interior of the Laurentide Ice Sheet and are poised for climate-driven thaw and landscape change, similar to permafrost-preserved ice-marginal glaciated landscapes where major geomorphic transformations are already observed (e.g., Kokelj et al., 2017; Rudy et al., 2017). The characteristics of thaw-driven landscape change, however, are expected to differ from observations in ice-marginal positions due to differences in (a) topography and climate affecting location and timing, (b) geotechnical properties affecting stability and mobility of sediments, and (c) geochemistry affecting solute and carbon release to surface water, ecosystems and the atmosphere.
These findings highlight the importance of geological and glaciological legacy in determining the characteristics of permafrost and the potential responses of permafrost systems to disturbance and climate change. The existence of preserved Laurentide basal ice offers a unique chance to better study processes and phenomena at the base of an ice sheet. This opportunity will gradually diminish as the ice can be expected to progressively melt in the future. This future melt will partially reveal subsurface conditions through the nature and magnitude of change. Continued research on permafrost and landscape response to warming at locations in the interior of the Laurentide Ice Sheet will help to understand and predict changes specific to these landscapes characterized by a mosaic of contrasting permafrost conditions, as well as how they affect ecology, climate, land use and infrastructure.

Table D1 compares soil organic-carbon densities and Table D2 soluble cation concentrations between this and previous studies.

Table D1 (caption): Soil organic-carbon density (SOCd, kg C m−3) per depth interval for three terrain types from the Lac de Gras study area (Table 2) compared with circumpolar mean values for similar soils reported in a recent compilation (Hugelius et al., 2014, Table 2). Upland till is compared to Turbels (cryoturbated permafrost soils), eskers with Orthels (mineral permafrost soils unaffected by cryoturbation) and organics with Histels (organic permafrost soils). Circumpolar values below 1 m are for thin sediment. For Orthels, values in thick sediment are more than 10 times larger.

Table D2 (caption): Concentration of soluble cations in active layer and permafrost in mineral soils compared with previous studies in northwestern Canada that employ a comparable analytical approach. In this study, active-layer values derive from pit samples and permafrost values from frozen core sections; samples below 2 m depth were used on eskers. Values from Lacelle et al. (2019) were derived using three sequential extractions in a 1 : 10 soil-water ratio on dried soils, likely yielding higher concentrations (see Toner et al., 2013), possibly by a factor of 2 or more, than what would be obtained with the method described in this study. Terrain types listed are those used in the studies referenced.
Author contributions. SG and RS wrote the manuscript together with SVK. RS conducted the initial study, performed or oversaw the laboratory analyses, and produced the scripts for plotting Figs. 3-6. SG produced Fig.1, the Supplement, the sections on ground ice, glaciology, soil organic-carbon density, and framing and conclusions.
Competing interests. The authors declare that they have no conflict of interest.
Acknowledgements. This research was part of the Slave Province Surficial Materials and Permafrost Study (SPSMPS) supported by the Canadian Northern Economic Development Agency, Dominion Diamond Mines and the Northwest Territories Geological Survey. Additional support was obtained from the Natural Sciences and Engineering Research Council of Canada and ArcticNet. We thank Barrett Elliott and Kumari Karunaratne for their great support in this project and Chris Burn for his advice. We thank Julia Riddick and Rosaille Davreux for their help in soil sampling; Nick Brown, Luca Heim and Christian Peart for field assistance; Jerry Demorcy and Elyn Humphreys for their help in laboratory analysis; Cameron Samson for helping with lidar data; and the Taiga Lab in Yellowknife for their assistance with the laboratory analysis of the samples. We acknowledge the help of Shintaro Hagiwara and the Carleton Centre for Quantitative Analysis and Decision Support with data smoothing in the profile figures, as well as Ariane Castagner and Nick Brown for sorting out the samples for reprocessing. Interactive comments by two anonymous referees and by Stephen Wolfe have helped to improve this paper. This is NTGS contribution no. 0130.
Financial support. This research has been supported by the Natural Sciences and Engineering Research Council of Canada (grant no. RGPIN-2015-06456).
Review statement. This paper was edited by Christian Hauck and reviewed by two anonymous referees.
Investigation of Mode Coupling in Normal Dispersion Silicon Nitride Microresonators for Kerr Frequency Comb Generation
Kerr frequency combs generated from microresonators are the subject of intense study. Most research employs microresonators with anomalous dispersion, for which modulation instability is believed to play a key role in initiation of the comb. Comb generation in normal dispersion microresonators has also been reported but is less well understood. Here we report a detailed investigation of few-moded, normal dispersion silicon nitride microresonators, showing that mode coupling can strongly modify the local dispersion, even changing its sign. We demonstrate a link between mode coupling and initiation of comb generation by showing experimentally, for the first time to our knowledge, pinning of one of the initial comb sidebands near a mode crossing frequency. Associated with this route to comb formation, we observe direct generation of coherent, bandwidth-limited pulses at repetition rates down to 75 GHz, without the need to first pass through a chaotic state.
Recently, high-quality-factor (Q) microresonators have been intensively investigated for optical comb generation. Both whispering gallery mode resonators employing tapered fiber coupling and chip-scale microresonators employing monolithically fabricated coupling waveguides are popular. Tuning a continuous-wave (CW) laser into resonance leads to build-up of the intracavity power and enables additional cavity modes to oscillate through cascaded four-wave mixing (FWM) [1][2][3][4][5][6][7][8][9][10][11][12][13][14][15]. Modulational instability (MI) of the CW pump mode is commonly cited as an important mechanism for comb generation [16][17][18]. According both to experiment and to theoretical analysis, comb generation preferentially occurs in resonators with anomalous dispersion. However, comb generation in resonators characterized by normal dispersion has also been observed experimentally [5,8,[19][20][21][22][23][24]. Several models have been proposed to describe this phenomenon. Although MI gain is absent in fibers or waveguides with normal dispersion, in resonators the detuning provides an extra degree of freedom which enables MI to take place in the normal dispersion regime, hence providing a route to comb generation [16,18,25]. However, this mechanism requires either a precise relationship between detuning and pump power, making it difficult to realize practically, or hard excitation, a nonadiabatic process under which pump photons must be initially present in the resonator [17].
Mode coupling has also been suggested as a mechanism enabling comb generation in resonators with normal dispersion [26]. When resonances corresponding to different families of transverse modes approach each other in frequency, they may interact due to imperfections in the resonator. The theory of mode coupling in resonators has been well-established [27], and frequency shifts and avoided crossings have been observed [28][29][30][31][32][33]. In the anomalous dispersion regime, mode coupling has been reported to affect the bandwidth scaling of frequency combs [34] and the process of soliton formation [35]. However, in these cases anomalous dispersion is still considered to be the determining factor for comb generation; mode coupling is considered to be detrimental, inhibiting the formation of solitons and limiting comb bandwidth. In the normal dispersion regime, measurements have been performed with CaF2 whispering gallery mode resonators [26]. The experiments demonstrate strong local frequency shifts that are attributed to mode interactions and show a correlation between the presence of such local frequency shifts and the ability to generate combs in these normal dispersion resonators. Significant changes in comb spectra have been observed when pumping different longitudinal modes spaced by only a few free spectral ranges (FSR), both in normal dispersion silicon nitride microring resonators [5] and in the whispering gallery mode resonators of [26]; in the latter case, such effects were specifically attributed to mode interactions.
In the current report, we perform comb-assisted precision spectroscopy measurements [36] of few-moded silicon nitride microresonators in the normal dispersion regime over frequency ranges spanning dozens of FSRs. As a result we are able to clearly map out mode interactions and obtain plots of resonant frequencies exhibiting strong avoided crossings closely analogous to those that occur for quantum mechanical energy surfaces [37][38][39]. The frequency shifts affecting a series of resonances from both mode families can lead to a strong change in local dispersion, even changing its sign. We provide clear experimental evidence that this mode coupling plays a major role in the comb generation process for our normal dispersion resonators by showing experimentally, for the first time to our knowledge, that the location of one of the two initial sidebands at the onset of comb generation is "pinned" at a mode crossing frequency, even as the pump wavelength is changed substantially [40].
These effects allow us to realize a "Type I" comb [5], also termed a natively mode-spaced (NMS) comb [41], in a resonator with FSR slightly under 75 GHz. In such a comb the initial sidebands are generated via a soft excitation mechanism and are spaced one FSR from the pump; the comb exhibits low noise and high coherence immediately upon generation [5,[20][21][22]41]. We also find that the Type I comb as generated here corresponds directly to a train of bandwidth-limited pulses. This is in sharp contrast to "Type II" combs [5] (also termed multiple mode-spaced (MMS) combs [41]), for which the initial sidebands are separated from the pump by several FSR, after which additional, more closely spaced sidebands are generated (usually with increasing intracavity power) to arrive at single-FSR spacing. Such Type II combs exhibit poor coherence and high noise [5,21,41,42]. Mode-locking transitions in which Type II combs switch into a coherent, low noise state have been observed experimentally and studied theoretically [17, 23-25, 35, 41, 43-54]. However, these methods require careful and sometimes complex tuning of the pump frequency or power [35]; the mode-locking transition is often difficult to achieve and until very recently had not been observed in normal dispersion microresonators. Our recent demonstration of dark soliton formation in resonators with normal dispersion is linked to a mode-locking transition [23], but the waveforms generated are quite distinct from the bandwidth-limited pulses reported here.
The field in the microresonator can be expressed using the following coupled-mode equations:
dA1/dt = [iδ1 − (κ1 + κc1)/2] A1 + iκ12 A2 + √κc1 Ein,
dA2/dt = [iδ2 − (κ2 + κc2)/2] A2 + iκ21 A1 + √κc2 Ein.    (1)
Here A1 and A2 are the intracavity fields for modes 1 and 2, respectively; κ1 and κ2 are decay rates due to the intrinsic loss of the two modes, while κc1 and κc2 are coupling rates between the resonator and the bus waveguide; δ1 = ω − ω1 and δ2 = ω − ω2 are the frequency detunings of the pump (at frequency ω) from the resonant frequencies ω1 and ω2; and κ12, κ21 are the mode coupling coefficients. We can simulate mode interaction effects using these coupled-mode equations. Two modes are assumed to operate close to 1550 nm in the under-coupling regime: mode 1 with a loaded Q of 10^6 and an extinction ratio of 10 dB, and mode 2 with an extinction ratio of 5 dB. We solve Eq. (1) and plot the resulting transmission spectra for different separations (over the range -5 GHz to 5 GHz) between the resonances of the two modes. Without mode coupling (κ12 = κ21 = 0, Fig. 1(a)), the resonances approach and cross each other at a constant rate. However, with mode coupling turned on (Fig. 1(b)), the dips in transmission are shifted in frequency, resulting in an avoided crossing. The mode interactions also lead to significant changes in the extinction ratios and linewidths of the resonant features. Similar effects are observed in our experiments, as we relate below. Our experiments utilize silicon nitride resonators fabricated to have a 2 μm × 550 nm waveguide cross-section. According to simulations for the two transverse electric (TE) modes, TE1 and TE2, these waveguides are clearly in the normal dispersion regime for both modes [20]. We first study a resonator with a total path length of 5.92 mm, which corresponds to an FSR slightly under 25 GHz. Similar to [14], to avoid stitching error we introduce a finger-shaped structure for the resonator so that it can fit in a single field of our electron beam lithography tool. Figure 2(a) shows a microscope image of the microresonator. The light is coupled both in and out through lensed fibers which are positioned in U-grooves to improve stability when working at high power [20]. Fiber-to-fiber coupling loss is ~5 dB. The measured transmission spectrum, showing resonances throughout the lightwave C band, is given in Fig. 2(b). Zooming in on the transmission spectrum, as shown in the inset, resonances of two transverse mode families with different depths can be observed. The loaded Q factors at the frequencies shown are ca. 1×10^6 (intrinsic Q ~1.7×10^6) and 0.3×10^6 (intrinsic Q ~0.35×10^6) for modes 1 and 2, respectively.
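To make the connection to the simulated spectra of Fig. 1 concrete, the short Python sketch below solves the steady state of two coupled modes of the form of Eq. (1) and tracks the transmission dips as one resonance is tuned across the other. All rate constants, and the through-port relation Eout = Ein − √κc1 A1 − √κc2 A2 used for the output, are illustrative assumptions of standard temporal coupled-mode theory rather than the exact values or model used in the paper.

import numpy as np

# Decay and coupling rates (rad/s); illustrative assumptions, not the paper's values
kappa1_i, kappa1_c = 2*np.pi*120e6, 2*np.pi*80e6    # mode 1: intrinsic loss, bus coupling
kappa2_i, kappa2_c = 2*np.pi*500e6, 2*np.pi*150e6   # mode 2: intrinsic loss, bus coupling
g = 2*np.pi*300e6                                   # mode coupling coefficient (kappa12 = kappa21)

def transmission(delta_pump, offset):
    """Steady-state through-port transmission for a pump detuned by delta_pump from
    mode 1, with the mode-2 resonance offset by 'offset' (both in rad/s)."""
    d1, d2 = delta_pump, delta_pump - offset
    M = np.array([[1j*d1 - (kappa1_i + kappa1_c)/2, 1j*g],
                  [1j*g, 1j*d2 - (kappa2_i + kappa2_c)/2]])
    A = np.linalg.solve(M, -np.array([np.sqrt(kappa1_c), np.sqrt(kappa2_c)]))  # Ein = 1
    Eout = 1.0 - np.sqrt(kappa1_c)*A[0] - np.sqrt(kappa2_c)*A[1]
    return abs(Eout)**2

# Track the deepest transmission dip as the mode-2 resonance is tuned across mode 1
for sep_GHz in (-5, -2, -1, -0.5, 0, 0.5, 1, 2, 5):
    dpump = 2*np.pi*np.linspace(-6e9, 6e9, 4001)
    T = np.array([transmission(d, 2*np.pi*sep_GHz*1e9) for d in dpump])
    print(f"mode-2 offset {sep_GHz:+5.1f} GHz -> deepest dip at {dpump[T.argmin()]/(2*np.pi*1e9):+.3f} GHz")

With the dip positions printed for each offset, the repulsion of the two dips (the avoided crossing) and the change in dip depth near zero offset become apparent, mirroring the behavior of Fig. 1(b).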
We use the frequency-comb-assisted spectroscopy method of [36] to accurately determine the resonance positions and compute the changes in FSR with wavelength to estimate the dispersion for the TE modes. The measured FSRs are given in Fig. 3(a). The FSRs for the two modes are around 24.8 GHz and 24.4 GHz, respectively. Both modes are fitted with our simulated dispersion, showing good agreement, which confirms that the resonator is in the normal dispersion regime for the TE modes (ΔFSR < 0), where ΔFSR denotes the difference in FSR between adjacent resonances, taken in order of increasing optical frequency, and can be expressed as
ΔFSR = (D λ² / ng) FSR²,    (2)
with D the group velocity dispersion parameter, λ the wavelength and ng the group index. However, at several wavelengths for which the resonances associated with the two transverse modes are closely spaced (1532 nm, 1542 nm and 1562 nm), the FSRs of the two modes change significantly, such that their FSRs become more similar. In these cases we clearly observe that the mode coupling results in a major modification to the local dispersion, even changing the sign of dispersion in some wavelength regions. To take a closer look at this phenomenon, in Fig. 3(b) we plot the transmission spectrum in the vicinity of the mode crossing region near 1542 nm. To visualize the data in a form analogous to the simulations of Fig. 1, we vertically align different pieces of the transmission spectrum separated by a constant 24.82 GHz increment (the nominal FSR of the higher-Q mode around 1542 nm). Since the average dispersion contributes a change in FSR below 15 MHz in the range plotted, without mode coupling one of the modes should appear as very nearly a vertical line, while the other should appear as a tilted line due to the difference in FSRs. However, in Fig. 3(b) we observe that the curves bend as they approach each other, resulting in an avoided crossing, similar to the simulation results of Fig. 1(b). Changes in the extinction ratio of the resonances are also clearly evident in the mode interaction region. These data provide detailed and compelling evidence of strong mode coupling effects on the linear spectrum.
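As a quick numerical check of Eq. (2), the snippet below evaluates the per-mode FSR change for the ~25 GHz resonator; the group index and dispersion value are assumed numbers typical of such SiN waveguides, not measured ones.

# Assumed waveguide parameters; only the FSR comes from the measurement above
n_g = 2.1                      # group index (assumed)
lam = 1550e-9                  # wavelength in m
D = -150e-6                    # dispersion in s/m^2, i.e. about -150 ps/(nm km), normal (assumed)
fsr = 24.8e9                   # Hz, FSR of the higher-Q mode

delta_fsr = D*lam**2/n_g*fsr**2            # Eq. (2)
print(f"FSR change per mode: {delta_fsr/1e6:+.3f} MHz")
print(f"cumulative FSR change over 100 modes: {100*delta_fsr/1e6:+.1f} MHz")

With these assumed values the per-mode change is only about -0.1 MHz, and the cumulative change over roughly 100 modes is on the order of 10 MHz, consistent with the sub-15 MHz background dispersion quoted above and far smaller than the mode-interaction shifts.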
A different case of mode crossing is observed around 1552 nm. Here there is no obvious change in FSR around the wavelength where the resonances of these modes approach each other. The aligned resonance pairs are shown in Fig. 3(c). The picture resembles the case shown in Fig. 1(a), where no mode coupling is assumed. However, if we zoom in on the data, we can again see slight shifts in the positions of the resonances when they are close enough. In this case mode coupling effects are present but weak.
In comb generation experiments, we pumped the microresonator with a single CW input at 1.75 W (this is the value prior to the chip, without accounting for coupling loss) tuned to different resonances of the high-Q mode family and recorded the comb spectra. The results are given in Fig. 4. In Fig. 4(a) we pump at 28 different resonances between 1554 and 1560 nm. The frequency spacing of the comb varies from 33 FSR for pumping at 1554 nm to 7 FSR for pumping at 1559.4 nm. We observe that the nearest long wavelength sideband remains anchored at approximately 1560.5 nm, very close to the ~1562 nm mode interaction feature. With the pump shifted by a total of 694 GHz (27 FSRs), we find that the long wavelength sideband shifts only slightly. Meanwhile the short wavelength sideband varies at twice the rate of the pump tuning, for a total frequency variation of ~1.3 THz. Similar behavior is observed when we pump between 1546 nm and 1549.5 nm. As shown in Fig. 4(b), one of the sidebands is anchored near 1550.5 nm, close to the weaker 1552 nm mode interaction feature. In this case comb generation is missing for some pump wavelengths, which may be the result of the weak coupling strength. The observed pinning of one of the initial sidebands very close to a mode interaction feature clearly suggests that mode coupling is a major factor in comb generation in this normal dispersion microresonator. According to the anomalous dispersion analysis of Ref. [41], increasingly large dispersion is needed to generate "NMS" (Type I) combs as the resonator FSR decreases. Physically, the increased dispersion brings the MI gain peak closer to the pump. In order to reduce the gain peak to a single-FSR frequency offset, it was shown that the dispersion-induced change in FSR should be made comparable to the resonance width (200 MHz for silicon nitride resonators with Q~10^6). For a resonator with FSR~100 GHz, the dispersion required for the generation of a Type I comb would already be very large. Furthermore, according to Eq. (2), the required dispersion grows quadratically as the resonator size is further increased (the required D grows as the inverse square of the FSR). Such dispersions are generally too large to achieve practically; perhaps as a result, no observation of Type I comb generation in sub-100 GHz silicon nitride resonators has been reported. However, mode coupling can dramatically change the local dispersion, both increasing its magnitude and changing its sign. Generation of Type I combs from large whispering gallery mode (WGM) resonators has previously been reported and attributed to mode coupling [25,26]. In our experiments we have observed Type I comb generation in SiN for a resonator with an FSR slightly below 75 GHz. We have not yet obtained an NMS comb from resonators with even smaller FSR. However, as shown in Fig. 4(b), a comb with 2-FSR separation is observed when the 25 GHz FSR resonator is pumped at 1550.41 nm. This means that the 1st sideband is less than 50 GHz from the pump, which is still very difficult to achieve without mode coupling effects.
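The scaling argument above can be made concrete by inverting Eq. (2): the short sketch below estimates the magnitude of dispersion needed to push the FSR change per mode up to a ~200 MHz resonance width for several resonator sizes, showing the inverse-square growth with FSR. The group index, wavelength and target linewidth are assumed values for illustration.

# Invert Eq. (2): dispersion magnitude needed for |dFSR| ~ 200 MHz (assumed linewidth)
n_g, lam, target = 2.1, 1550e-9, 200e6     # group index and wavelength assumed
for fsr in (25e9, 75e9, 100e9, 230e9):
    D_required = target*n_g/(lam**2*fsr**2)          # s/m^2
    print(f"FSR {fsr/1e9:5.0f} GHz -> |D| ~ {D_required/1e-6:9.0f} ps/(nm km)")

The resulting values are of order 10^4 ps/(nm km) or more for sub-100 GHz FSRs, orders of magnitude beyond what waveguide engineering typically provides, which is why a local dispersion modification by mode coupling is so consequential here.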
The fabricated ~75 GHz FSR resonator is shown in Fig. 5(a). Unlike the 25 GHz resonator discussed earlier, it has a drop-port design, which has been observed to reduce the power difference between the pump and adjacent comb lines, yielding a smoother comb spectrum without the usual strong pump background [20]. Using the frequency-comb-assisted spectroscopy method, two mode families with FSRs around 74.7 GHz and 72 GHz are observed. The two families of resonances approach each other around 1563 nm. Different sections of the transmission spectrum are aligned in a similar fashion as for Fig. 3(b-d) and plotted in Fig. 5(b). An avoided crossing evidencing mode coupling is clearly observed. The comb results are shown in Fig. 5(c) for pumping between 1554.71 nm and 1566.59 nm. Again, one of the 1st sidebands is "pinned" near the mode crossing wavelength. Although the first sideband has a 13-FSR separation when pumping at 1555 nm, a Type I comb can be generated for pumping at 1562.62 nm, 1563.22 nm, 1563.81 nm and 1564.43 nm (as shown in the circled area). We note that in this case, pinning of the 1st sideband is observed for pumping on either the blue side or the red side of the mode crossing area. As an example, when pumping at 1562.62 nm with 1.6 W input, more than 20 comb lines with 1-FSR separation are generated. The spectrum observed at the drop port is shown in Fig. 6(a). Fifteen of the lines are selected by a bandpass filter and amplified in an EDFA; the resulting spectrum is shown in Fig. 6(b). The amplified and filtered comb is directed to an intensity autocorrelator based on second harmonic generation in a noncollinear geometry. A length of dispersion compensating fiber (DCF) is used to achieve dispersion compensation of the entire fiber link (including the EDFA) connecting the SiN chip to the autocorrelator. The length of DCF was adjusted by injecting a short pulse from a passively mode-locked fiber laser into the front end of the fiber link and minimizing its autocorrelation width. The autocorrelation trace measured for the comb is plotted in Fig. 6(c). Also plotted is the autocorrelation of the ideal bandwidth-limited pulse, calculated from the spectrum of Fig. 6(b) with the assumption of flat phase. Clearly the generated pulses, with an estimated duration of 2.7 ps FWHM, are very close to bandwidth-limited. We have also used a photodetector and spectrum analyzer to examine the low frequency intensity noise of the comb (measurement bandwidth: ~500 MHz). The intensity noise is below the background level of our measurement setup. Similar low noise, bandwidth-limited pulse generation is observed for the "Type I" combs generated via pumping at other resonances of this same resonator. These data demonstrate that the Type I combs reported here are generated directly in a mode-locked state featuring low noise, high coherence, and a bandwidth-limited temporal profile, though with a limited number of comb lines.
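The flat-phase (bandwidth-limited) reference pulse and its autocorrelation can be computed directly from a comb spectrum, as in the following sketch; the line powers used here are made-up placeholders rather than the measured spectrum of Fig. 6(b).

import numpy as np

# Made-up comb line powers (dB); flat spectral phase assumed for the bandwidth limit
powers_dB = np.array([-13., -10., -8., -6., -4., -2., -1., 0., -1., -2., -4., -6., -8., -10., -13.])
fsr = 74.7e9                                    # line spacing of the ~75 GHz resonator
amps = 10**(powers_dB/20)                       # field amplitudes

t = np.linspace(-0.5/fsr, 0.5/fsr, 4096, endpoint=False)
lines = np.arange(len(amps)) - len(amps)//2
field = np.zeros_like(t, dtype=complex)
for a, n in zip(amps, lines):
    field += a*np.exp(2j*np.pi*n*fsr*t)         # flat phase -> transform-limited pulse
intensity = np.abs(field)**2

dt = t[1] - t[0]
print(f"transform-limited FWHM ~ {(intensity > intensity.max()/2).sum()*dt*1e12:.2f} ps")

# Intensity autocorrelation, the quantity a background-free SHG autocorrelator measures
ac = np.correlate(intensity, intensity, mode="full")
print(f"autocorrelation FWHM ~ {(ac > ac.max()/2).sum()*dt*1e12:.2f} ps")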
Our group has previously reported direct Type I generation, with behavior similar to that shown in Fig. 6, from a smaller, normal dispersion SiN microresonator with ~230 GHz FSR [20]. Although we speculated that mode interactions may have played a role in allowing comb generation, as pointed out theoretically in [26], we were unable to present data to support this speculation. Based on the insight developed in the current paper, we decided to reexamine our data from the device of [20]. Figure 7 shows the comb spectra obtained for pump powers just above threshold, plotted in the same fashion as Figs. 4 and 5(c). This format clearly shows pinning of one of the initial sidebands, revealing what we now understand to be a signature of comb initiation through mode coupling.
The pathway to coherent pulse generation reported here is clearly distinct from that observed in [23,24,35,42], which attain broader comb bandwidths but need to navigate through a chaotic state before arriving at a transition to mode-locking. Although a theoretical explanation for this phenomenon is still not fully available, our simulation using the Lugiato-Lefever (L-L) equation [23,45,46,55] shows that stable, coherent Type I combs can be induced in normal dispersion silicon nitride resonators by introducing a phase shift term (as suggested by Ref. [26] to model the effect of mode interaction) to a single resonance adjacent to the pump resonance. However, the Type I combs in our simulations, though coherent, do not correspond to bandwidth-limited pulses; instead the comb field is phase modulated and temporally broadened. This discrepancy suggests some factors may be missing in the model. One possibility is that instead of adding a phase shift to a single resonance, phase shifts should be assigned to a group of resonances distributed around the mode interaction region. Another, more radical idea is that a heretofore unidentified nonlinear amplitude modulation mechanism may be present, as suggested briefly in Refs. [20,41]. Mode interactions may contribute to such a mechanism, since a superposition of transverse modes leads to a longitudinally modulated spatial profile which may either increase or decrease overlap with waveguide imperfections. Nonlinearity could shift such spatial profiles, under appropriate circumstances reducing loss, analogous to decreased loss through nonlinear lensing in Kerr-lens mode-locked lasers. Another possibility is that wavelength dependent Q introduces a spectral filtering effect which contributes to shaping the time domain field, as suggested in Ref. [24].
Fig. 7. Observation of "pinned" 1st sidebands for the resonator with the passively mode-locked "Type I" comb discussed in Ref. [20].
Finally, we note that previous studies have associated mode coupling with asymmetric comb spectra [19,26]. However, mode interaction spectra such as those in Figs. 3 and 5 were neither reported nor registered with the generated combs. In our experiments asymmetric spectra were observed for both resonators studied; Fig. 8 shows four examples with the mode interaction region identified through the pinned 1st sideband. For the resonator with ~25 GHz FSR, the separations between the pump and the 1st sideband are 15 FSR and 2 FSR in Figs. 8(a) and 8(b), respectively; for the ~75 GHz FSR resonator, the pump position is changed from the short wavelength side of the coupling region (Fig. 8(c)) to the long wavelength side (Fig. 8(d)). In each case the 1st sideband "pinned" close to the mode crossing has higher power than the 1st sideband on the other side of the pump. However, fewer comb lines are generated on the side of the pump corresponding to the pinned sideband. We may understand this behavior by noting that flattened dispersion is favorable for broadband comb generation [13,46,56]. In our experiments mode coupling modifies the local dispersion, allowing MI gain for initiation of comb generation but at the same time giving rise to significant higher-order dispersion that limits growth of the comb bandwidth on the mode interaction side.
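For readers who wish to reproduce the kind of L-L simulation mentioned above, the following minimal sketch shows one way a frequency shift on a single resonance adjacent to the pump can be inserted into a normalized Lugiato-Lefever split-step integrator. The normalization, sign conventions and all parameter values are assumptions chosen for illustration, not the ones used in the paper; whether a stable Type I comb actually forms depends on those choices.

import numpy as np

# Normalized Lugiato-Lefever split-step with one shifted resonance (all values assumed)
N = 256
mu = np.fft.fftfreq(N, d=1.0/N)          # comb mode index relative to the pump
alpha = 3.0                               # pump-resonance detuning
beta = 0.05                               # dispersion coefficient; >0 taken as normal here
F = 2.0                                   # pump amplitude
shift = -2.0                              # extra frequency shift of the mode at mu = +1
dt, steps = 5e-4, 50000

# Per-mode linear operator: loss, detuning, dispersion, and the single shifted resonance
delta_mu = alpha + 0.5*beta*mu**2 + np.where(mu == 1, shift, 0.0)
L = -1.0 - 1j*delta_mu

rng = np.random.default_rng(0)
E = 1e-3*(rng.standard_normal(N) + 1j*rng.standard_normal(N))   # weak noise seed

for _ in range(steps):
    E = np.fft.ifft(np.fft.fft(E)*np.exp(L*dt))    # linear step, applied per mode
    E = E*np.exp(1j*np.abs(E)**2*dt) + F*dt        # Kerr phase + CW pump (mu = 0 only)

P = np.abs(np.fft.fft(E)/N)**2
top = np.argsort(P)[::-1][:5]
print("strongest modes (index, power rel. to max):",
      [(int(mu[k]), round(float(P[k]/P.max()), 3)) for k in top])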
In summary, we have demonstrated what we believe to be conclusive evidence of the impact of mode coupling on initiation of comb generation in normal dispersion silicon nitride microresonators. We have also demonstrated mode-coupling-assisted "Type I" comb generation resulting in direct generation of bandwidth-limited pulses, without the need to navigate through a chaotic state.
Fig. 8. Asymmetric comb spectra. (a) and (b): comb generation using resonators with ~25 GHz FSR, with the "pinned" sideband close to 1551 nm; (c) and (d): comb generation using resonators with ~75 GHz FSR, with the "pinned" sideband close to 1563 nm.
Aging Mechanism and Properties of SBS Modified Bitumen under Complex Environmental Conditions
Bitumen aging can lead to the deterioration of asphalt pavement performance, shortening the service life of the road. Current studies on the ultraviolet (UV) aging of bitumen either ignore the effects of natural environmental conditions or consider only the effects of water. To address this problem, in this study coupled aging tests combining UV radiation with different aqueous media were carried out on virgin bitumen and styrene butadiene styrene (SBS) modified bitumen in a UV environment chamber. A combination of macroscopic performance tests and microstructure tests was used to analyze the physical, rheological, and microstructural changes of virgin bitumen and SBS modified bitumen after thin film oven test (TFOT) aging and UV aging in different environments (UV, UV + Water, UV + Acid, UV + Salt). Dynamic shear rheometer (DSR) results indicated that UV aging increases the rutting factor and improves rutting resistance at high temperature. The Fourier transform infrared spectroscopy (FTIR) results illustrated that the bitumen is oxidized and SBS is degraded under ultraviolet radiation. The four-component analysis test results showed that light components migrate to heavy components during the aging process. Moreover, water aggravates the UV aging of bitumen, and the presence of acid or salt worsens ultraviolet aging further.
Introduction
As a kind of material with superior performance, styrene butadiene styrene (SBS) modified bitumen has been widely used in pavement construction and maintenance [1,2]. However, the performance of SBS modified bitumen deteriorates under unfavorable environmental conditions and vehicle loads, which is partly due to aging under ultraviolet (UV) radiation and high-temperature environments [3][4][5]. Bitumen aging can be divided into thermal aging and photoaging [6,7]. Thermal aging mainly occurs during mixture mixing, transportation, paving, and other pavement construction stages. Photoaging, which is caused mainly by ultraviolet radiation, occurs during the pavement service life. The aging properties of bitumen or bituminous mixtures are usually evaluated based on the results of a UV aging test that either ignores the effects of natural environmental conditions or only considers the effects of water. Results obtained under such conditions fail to precisely reflect the aging that occurs under real service conditions.
Materials
The virgin bitumen used in our research is road petroleum bitumen with a penetration grade of 70 and a quality grade of A, produced by the Maoming Branch of Sinopec, and the SBS modifier has a linear structure and was produced by the Yanshan Petrochemical Company of China. Tables 1 and 2 show the main physical properties of the virgin bitumen and the SBS.
Preparation of SBS Modified Bitumen
The virgin bitumen in the melting state (135 °C) was mixed with 5% (by mass of the virgin bitumen) SBS modifier, the SBS was evenly dispersed into the virgin bitumen by hand stirring for 2-3 min, and then the blend was heated to 160 °C in a multipurpose electric furnace (CS-2.5, Xinghua Macro Industry Electric Equipment Factory, Jiangsu, China). Afterwards, it was subjected to rotary shearing at 4000 r/min for 60 min at 160 °C using a high-speed shear (FM300, Shanghai FLUKO Technology Development Co., LTD., Shanghai, China). After the rotary shearing was completed, it was placed in an oven (101-II, Cangzhou Dongda Test Instrument Co., LTD., Hebei, China) at 160 °C for swelling for 1.5 h to obtain the SBS modified bitumen.
Preparation of Water, Acid and Salt Solutions
The distilled water was prepared using a distillation machine (Shenzhen Yiliyuan Water Treatment Equipment Co., LTD., Shenzhen, China). The main components of acid rain are NH4^+, Ca^2+, Na^+, K^+, Mg^2+, H^+, SO4^2-, HSO4^-, NO3^-, HCO3^-, Cl^-, etc., and the concentration of SO4^2- in acid rain is 5-10 times higher than that of NO3^-. In this paper, the acid solution was prepared from analytically pure sulfuric acid, nitric acid and distilled water according to SO4^2- : NO3^- = 9:1 and pH = 3 [28,29], as measured by a pH meter (PHS-25, Shanghai INESA Scientific Instrument Co., Ltd., Shanghai, China). The salt solution was prepared from sodium chloride crystals and distilled water at a concentration of 7%. Figure 1 shows the distilled water and the two solutions.
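As a simple check of the acid recipe, the required acid concentrations can be estimated by combining the pH target with the 9:1 sulfate-to-nitrate ratio. The short calculation below assumes the ratio is a molar ratio and that both acids dissociate completely, which is only an approximation (the second dissociation of sulfuric acid is incomplete near pH 3).

# Target: pH = 3 and SO4^2- : NO3^- = 9 : 1 (taken as a molar ratio)
target_H = 1e-3                      # mol/L of H+ at pH 3
ratio = 9.0                          # assumed molar ratio [SO4^2-]/[NO3^-]
# H+ balance for the added acids: 2*[H2SO4] + [HNO3] = target_H, with [H2SO4] = ratio*[HNO3]
c_HNO3 = target_H/(2*ratio + 1)
c_H2SO4 = ratio*c_HNO3
print(f"HNO3 ~ {c_HNO3*1e6:.0f} umol/L, H2SO4 ~ {c_H2SO4*1e6:.0f} umol/L")

With these assumptions the recipe works out to roughly 470 umol/L of H2SO4 and 50 umol/L of HNO3 before fine adjustment against the pH meter reading.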
Aging Procedure
The thin film oven test (TFOT) was selected to simulate the thermal oxidation of bitumen during mixing and paving of bituminous mixtures according to ASTM D1754 (2014). To ensure that the thickness of the bitumen film was about 3 mm, 50 ± 0.5 g of virgin bitumen or SBS modified bitumen was placed on a Φ140 ± 0.5 mm iron pan. Subsequently, the bitumen film specimens were placed in a film oven to rotate at a speed of 5.5 r/min and aged for five hours at 163 °C. The samples of SBS modified bitumen and virgin bitumen after TFOT aging were then exposed to ultraviolet radiation for seven days in a self-made ultraviolet environment simulation chamber (see Figure 2). The environment chamber adopts an LED cold light source, the main ultraviolet wavelength is 365 nm, and the temperature is 25 °C. The UV aging tests of SBS modified bitumen and virgin bitumen were set to four modes, namely UV, UV + water, UV + acid, and UV + salt. For the UV + water, UV + acid, and UV + salt modes, 1 g of water, acid solution, or salt solution was sprayed on the surface of the bitumen sample daily to simulate the adverse environmental conditions experienced by asphalt pavement.
Physical Properties Test
At present, the penetration, softening point, and viscosity are used as the main indexes to evaluate bitumen performance in most countries. The physical characteristics of the SBS modified bitumen and virgin bitumen were determined, namely the penetration (25 °C) and the softening point, according to ASTM D5 and ASTM D36, respectively.
Dynamic Shear Rheometer (DSR)
A dynamic shear rheometer (Physica MCR 301, Anton Paar Instruments, Ostfildern, Germany) was used to conduct temperature sweep tests on virgin bitumen and SBS modified bitumen in different aging states according to ASTM D7175. All of the tests were performed in constant-strain mode at a fixed frequency of 10 rad/s. In order to ensure that the deformation of the bitumen sample remained within the linear viscoelastic range, for the virgin bitumen the strain control values of the temperature sweep test for the original sample, the TFOT sample, and the UV-aged samples were 12%, 10%, and 1%, respectively. Correspondingly, the values for SBS modified bitumen were 3%, 3%, and 1%. The temperature ranged from 40 °C to 90 °C with an increment of 2 °C/min. The gap and the diameter of the parallel plates were 1 mm and 25 mm, respectively. Rheological parameters such as the complex shear modulus (|G*|), phase angle (δ), and rutting factor (|G*|/sin δ) as a function of temperature for all samples were used to evaluate the rheological properties of the bitumen in the aging process.
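For reference, the rutting factor follows directly from the storage and loss moduli reported by the DSR; the snippet below shows the arithmetic with made-up modulus values rather than measured data.

import math

# Example: rutting factor from one DSR reading (modulus values made up for illustration)
G_storage, G_loss = 2.4e3, 9.6e3                  # Pa
G_star = math.hypot(G_storage, G_loss)            # |G*| = sqrt(G'^2 + G''^2)
delta = math.atan2(G_loss, G_storage)             # phase angle in radians
rutting_factor = G_star/math.sin(delta)           # |G*|/sin(delta)
print(f"|G*| = {G_star:.0f} Pa, delta = {math.degrees(delta):.1f} deg, "
      f"|G*|/sin(delta) = {rutting_factor:.0f} Pa")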
Fourier Transform Infrared Spectroscopy (FTIR)
Fourier transform infrared spectroscopy (FTIR) is considered one of the most promising and widely employed techniques for detecting the chemical functional groups present in polymer chains. In general, different functional groups correspond to absorption peaks at different wavenumbers in the infrared spectrum, and materials can be qualitatively and quantitatively analysed according to the appearance of the absorption peaks and their intensity and peak area. In this paper, a Bruker Tensor 27 Fourier transform infrared spectrometer (Bruker Corporation, Karlsruhe, Germany) was used to characterize the functional group changes of the polymer modified bitumen after aging. The spectra were obtained over the range 4000 cm−1 to 500 cm−1 with a resolution of 1-0.4 cm−1. The wavenumber accuracy is 0.01/2000 and the absorption precision is 0.01%.
Scanning Electron Microscope (SEM)
As an important research method in materials science, scanning electron microscopy (SEM, S-3000N, Hitachi Limited, Tokyo, Japan) is used to observe the morphology and structure of materials. The principle is that a high-energy electron beam directed at the sample produces secondary electrons, scattered electrons, and other signals, which are converted into electrical signals through systematic processing; after amplification by the video amplifier, an image of the sample surface is formed on the screen. In this paper, in order to observe the surface cracking characteristics of the aged bitumen samples, the scanning electron microscope (S-3000N, HITACHI, Japan) was operated at 5 kV with a magnification of 300. The specimens were sputter coated with a thin gold film beforehand to prevent charging during SEM observation.
Table 3 summarizes the effects of UV radiation on the physical properties of SBS modified bitumen and virgin bitumen in different environments. UV aging results in a decrease of penetration and an increase of the softening point of both virgin bitumen and SBS modified bitumen. This is due to the migration of light components of the bitumen to heavy components under ultraviolet radiation, the breaking of the C=C double bond of the elastic polybutadiene chain in SBS, and the degradation of SBS. As a result, SBS modified bitumen becomes harder and more brittle after aging, so its penetration decreases and its softening point increases. The effect of the acid or salt medium on the softening point and penetration of SBS modified bitumen is greater than that of the pure water medium, and the changes in penetration and softening point of the two kinds of bitumen are smallest under ultraviolet radiation in the dry environment. Aging leads the bitumen to become hard and brittle, so the penetration of the virgin bitumen and SBS modified bitumen decreases and the softening point increases. This indicates that moisture accelerates the UV aging of bitumen, while acid or salt further accelerates the corrosion and aging of bitumen and further affects its rheology and mechanics. It is also believed that water, acid, and salt can invade the pavement through cracks and cause more serious damage to the pavement structure as micro-cracks appear [12].
Rheological Properties
The high temperature rheological properties of virgin bitumen and SBS modified bitumen after short term aging, UV aging, and UV aging in different moisture media are shown in Figure 3 (BA and SBS represent virgin bitumen and SBS modified bitumen, respectively). The Strategic Highway Research Program recommends using the rutting factor (|G*|/sin δ) to evaluate the high temperature rutting resistance of bitumen: the greater the |G*|/sin δ, the better the high temperature rutting resistance. The rutting factor of virgin bitumen and SBS modified bitumen is increased after UV aging and after UV aging in the different moisture media, and that of the acid medium is the highest. UV aging increases the rutting factor (|G*|/sin δ) of bitumen, which has a certain improvement effect on the high temperature performance of bitumen.
In sum, the effects of water, acid, and salt on the bitumen during the aging process cannot be neglected. In acidic media, on the one hand, acidic substances such as carboxylic acids and phenols in the bitumen and the aging products of bitumen are dissolved and ionized; on the other hand, H+, SO4^2-, and NO3^- in the acid solution react with the active groups in the bitumen, so that the molecular bonds of the bitumen are broken and destroyed. The dual action of dissolution and chemical corrosion accelerates the aging of the bitumen. In salt media, the salt accumulates on the surface of the bitumen during the evaporation of water and changes the continuous state of the material [12]. Although bitumen has been regarded as a kind of waterproof material, moisture will also enter the interior of the bitumen as micro-cracks are generated on the bitumen surface. Under UV radiation, the saturated and aromatic components of bitumen are slightly dissolved in water, so the light component of the bitumen decreases, which accelerates the aging of the bitumen. In conclusion, water, acid, and salt can accelerate the ultraviolet aging of bitumen.
Four-Component Analysis
The contents of the four components of virgin bitumen and SBS modified bitumen after short-term aging, UV aging, and UV aging in different moisture media are shown in Figures 4 and 5. After UV aging, the changes in the four components of SBS modified bitumen and virgin bitumen are basically the same. Ultraviolet radiation changes the four-component content of the bitumen: the saturate content remains basically the same, the resin and aromatic contents decrease slightly, and the asphaltene content increases. Changes in the four-component content of the bitumen inevitably result in changes in its macroscopic properties, and the results of the four-component analysis test are consistent with the results of the DSR test and the physical property tests. Asphaltene is the heavy component with the largest molecular weight among the four components. During the aging process, some low molecular weight substances undergo polycondensation and migrate to the heavy components, so the asphaltene content changes the most. The relative density of the saturated fraction is the lowest. In the process of bitumen aging, the chain breaking reaction mainly occurs in the saturated fraction, forming small molecules with a low boiling point that are easy to volatilize. However, because the content of the saturated fraction is relatively small, the amount of volatilization is not obvious, so the saturate content remains basically the same. During aging, aromatics are converted into resin and resin is transferred to asphaltene, so the aromatic content decreases while the asphaltene content increases. As an intermediate product, the resin content is governed by the balance between its rates of formation and consumption, and it therefore decreases slightly. For asphaltene, the content after ultraviolet radiation in an aqueous medium is higher than that after pure ultraviolet radiation, especially in the acid medium. For the aromatic fraction, among the four ultraviolet radiation modes, the aromatic content in the acidic medium mode is the smallest, which indicates that the acidic solution has a more pronounced effect on the aging of the aromatics in the bitumen. This is due to the dissolution and ionization of acidic species in the bitumen and the chemical reactions between the reactive groups in the bitumen and the H+, SO4^2-, and NO3^- in the acid solution.
Functional Group Characteristics
The infrared spectra of virgin bitumen and SBS modified bitumen after short term aging, UV aging, and UV aging in different moisture media are shown in Figures 6 and 7. Carbonyl (C=O) and sulfoxide (S=O) groups can be used as markers of bitumen aging [24,30]. Under ultraviolet radiation, the single bond C-O in bitumen absorbs ultraviolet light, is promoted from the ground state to an excited state, and produces active free radicals. These free radicals are so active that they easily react with oxygen to form C=O. It is generally believed that the carbonyl group content increases and the carbonyl absorption peak intensity increases after UV aging. Figures 6 and 7 show that the characteristic peak of the carbonyl group is at 1697 cm−1. However, the carbonyl characteristic peaks of the virgin bitumen and SBS modified bitumen are not obvious after ultraviolet radiation, which may be due to the fact that the ultraviolet radiation time is too short to cause severe photoaging of the bitumen. Generally, the carbonyl index and sulfoxide index can be used to evaluate the aging behavior of bitumen. However, the changes in the characteristic functional group absorption peaks in this test are relatively small, so even if the test error is not taken into account, there would be a large error in the calculation of the absorption peak areas of the carbonyl and sulfoxide groups. Therefore, the characteristic functional group indexes are not used to evaluate the aging behavior of bitumen in this paper. The characteristic sulfoxide functional group appears at 1030 cm−1. The sulfoxide absorption peak of the virgin bitumen is slightly stronger than that of the SBS modified bitumen, which indicates that the aging of the virgin bitumen is more serious. The intensities of the sulfoxide absorption peaks of bitumen in the different environments change slightly, and the intensity ranks as UV < UV + Water < UV + Acid or UV + Salt. The peaks at 966 cm−1 and 699 cm−1 are the bending vibration absorption peaks of the C=C double bond in the polybutadiene segment (PB) and of C-H in the benzene ring of the polystyrene segment (PS), respectively. Under ultraviolet radiation, the intensities of these two absorption peaks decrease slightly, especially under the UV + Acid condition. This indicates that the C=C double bond of the PB segment and the C-H bond of the benzene ring of the PS segment are broken under ultraviolet radiation, resulting in the destruction of the original network crosslinking structure of the SBS in the modified bitumen, which is the main reason for the degradation of SBS.
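Although the functional group indexes were not used in this paper because the peak changes were too small, the way such indexes are commonly computed is illustrated below; the band limits, the choice of reference bands and the synthetic spectrum are assumptions for demonstration only.

import numpy as np

def band_area(wn, absorbance, lo, hi):
    """Trapezoidal area of the absorbance between two wavenumbers (cm^-1)."""
    m = (wn >= lo) & (wn <= hi)
    x, y = wn[m], absorbance[m]
    return float(np.sum(0.5*(y[1:] + y[:-1])*np.diff(x)))

# Synthetic spectrum for demonstration: Gaussian bands at the positions named in the text
wn = np.linspace(500.0, 4000.0, 3501)
absorb = (0.02
          + 0.30*np.exp(-((wn - 1460)/15.0)**2)   # CH2 bending, reference band
          + 0.20*np.exp(-((wn - 1375)/12.0)**2)   # CH3 bending, reference band
          + 0.05*np.exp(-((wn - 1697)/10.0)**2)   # carbonyl C=O
          + 0.08*np.exp(-((wn - 1030)/12.0)**2))  # sulfoxide S=O

reference = band_area(wn, absorb, 1350, 1525)
carbonyl_index = band_area(wn, absorb, 1670, 1720)/reference
sulfoxide_index = band_area(wn, absorb, 995, 1065)/reference
print(f"carbonyl index ~ {carbonyl_index:.3f}, sulfoxide index ~ {sulfoxide_index:.3f}")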
Apparent Morphology
SEM was used to analyse the surface structural characteristics of virgin bitumen and SBS modified bitumen after short term aging, UV aging, and UV aging in different aqueous media. The experimental results are shown in Figures 8 and 9 (magnification 300 times). Ultraviolet radiation can cause cracking of the bitumen surface, and the cracks in the SBS modified bitumen are more regular than those in the virgin bitumen under the same conditions. The cracking of the bitumen surface can be used to characterize the degree of aging: the more obvious the fragmentation of the surface cracking, the more serious the aging. Under ultraviolet aging in the moisture media, the cracking of the bitumen surface is further aggravated, and the surface of the virgin bitumen is more severely fragmented. In the acidic medium the bitumen is subjected to acid corrosion, the crack width of the virgin bitumen increases, and the corners of the fragments warp. The surface cracking of the SBS modified bitumen increases and becomes irregular, but the cracks are small. In the salt medium, the surface of the virgin bitumen is covered with salt and the cracks are deep and irregular after UV aging. Generally speaking, the existence of the dispersed SBS phase improves the engineering properties of bitumen at high and low temperatures. However, the surface cracking implies that virgin bitumen and SBS modified bitumen gradually become brittle at lower temperatures after UV aging [31]. In short, the aging resistance of SBS modified bitumen is better than that of virgin bitumen in terms of surface cracking due to UV aging. Under the four environments, the bitumen aging cracking ranks as UV < UV + Water < UV + Acid or UV + Salt, which is consistent with the results of the microstructure and macroscopic performance of bitumen after UV aging.
Summary and Conclusions
In this paper, macroscopic performance, microstructure, and apparent morphology test methods were used to investigate the UV aging of virgin bitumen and SBS modified bitumen. Based on the test results and statistical analysis of bitumen in different aging states, the following conclusions can be drawn: (1) The aging of bitumen pavement during its service life is not only affected by high temperature and ultraviolet radiation; the influence of environmental factors such as aqueous solutions on aging should not be ignored. When subjected to water, acid rain, or chloride salt corrosion, the asphalt pavement experiences a higher level of aging, especially in acidic or salt environments.
(2) The aging laws of virgin bitumen and SBS modified bitumen under ultraviolet radiation are basically the same. The anti-photoaging performance of SBS modified bitumen is better than that of virgin bitumen.
(3) The FTIR tests showed that, under UV radiation, the absorption peaks of the carbonyl (C=O) and sulfoxide (S=O) groups of the bitumen base increased, and the absorption peaks of the PB and PS segments of SBS decreased. This indicates that the nature of SBS modified bitumen aging is a change in the molecular structure of the bitumen base together with the degradation of SBS.
(4) After UV aging, macroscopically, the bitumen showed a decrease in penetration and an increase in softening point and rutting factor. Microscopically, the light components of the bitumen migrate to heavy components, resulting in a decrease in aromatic and resin content, an increase in asphaltene content, and cracking of the bitumen surface.
A Neutral PCNHCP Co(I)–Me Pincer Complex as a Catalyst for N-Allylic Isomerization with a Broad Substrate Scope
Earth-abundant-metal catalyzed double bond transposition offers a sustainable and atom-economical route toward the synthesis of internal alkenes. Because the emphasis has been placed mainly on internal olefins and ethers, the isomerization of allylic amines has been particularly underrepresented in the literature. Herein, we report an efficient methodology for the selective isomerization of N-allylic organic compounds, including amines, amides, and imines. The reaction is catalyzed by a neutral PCNHCP cobalt(I) pincer complex and proceeds via a π-allyl mechanism. The isomerization occurs readily at 80–90 °C, and it is compatible with a wide variety of functional groups. The in situ formed enamines could additionally be used for a one-pot inverse-electron-demand Diels–Alder reaction to furnish a series of diversely substituted heterobiaryls, which is further discussed in this report.
■ INTRODUCTION
Alkenes are ubiquitous in a wide variety of natural and industrial products. The selective transposition of terminal carbon−carbon double bonds to internal positions has been investigated for decades, mainly with precious metal catalysts (e.g., Pd, Ru, and Ir).1 Recently, significant efforts have been made to replace these precious metals with their earth-abundant congeners, such as iron, cobalt, and nickel.2 Using these metals has resulted in hallmark examples of earth-abundant-metal-catalyzed double bond migration (Figure 1), where the emphasis has mainly been on olefins and allyl ethers.1a,d,2a,3 By contrast, double bond migration from an N-allyl motif has been underrepresented in the literature despite its presence in a variety of natural products, agrochemicals, and industrially relevant compounds.3a,4 The isomerization of an N-allylic framework enables a selective and atom-economical pathway to highly polarized N-(1-propenyl) or, more generally, N-vinyl intermediates,3a whose enamines, enamides, and aza-dienes are commonly used in cycloadditions,5 cyclopropanations,6 heterocycle synthesis,7 halofunctionalizations,8 and transition-metal-catalyzed C−C bond-forming reactions.9 In addition, the transition-metal-catalyzed tandem isomerization of N-allylic double bonds followed by functionalization of the in situ formed N-vinyl intermediate offers access to functionalized molecules that would be difficult to synthesize otherwise.10 Furthermore, an added benefit of N-allyl isomerization is that the regio- and stereoselectivity of these reactions are often well defined.3a,11
Because of their synthetic utility, Otsuka and co-workers reported in the 1980s the first Co(I)-hydride-catalyzed isomerization of two allylamines to their corresponding trans-enamines.12 Stille, on the other hand, demonstrated the ruthenium-, rhodium-, and iron-catalyzed isomerization of allylamides to enamides, although different reaction conditions were necessary for each metal.13 Later, the scope and stereoselectivity were greatly improved by Krompiec and co-workers, who used noble-metal-containing catalysts.3a,14 Following these early examples, several recent studies have reported the stereoselective isomerization of allylamines and allylamides.1a,4a,b,15 Most notably, Trost and co-workers reported the isomerization of highly substituted N-allylamides to Z-enamides by utilizing a cationic ruthenium catalyst,16 while Schoenebeck and co-workers used an air-stable Pd(I) dimer for the E-selective synthesis of enamides.17 Recently, a new strategy was reported by Matsunaga and co-workers, who elegantly demonstrated that polysubstituted enamides could be synthesized via Co-catalyzed hydrogen atom transfer-mediated alkene isomerization.18 Besides these hallmark examples, there are only a few studies that report the transition-metal-catalyzed isomerization of allyl imines to azadienes,19 which are interesting building blocks for cycloaddition reactions. Overall, most of these reactions are catalyzed by precious metals, leaving ample opportunity to develop earth-abundant alternatives. Furthermore, no universal strategy has been developed that allows the isomerization of general N-allylic substrates such as allylamines, allylamides, and allylimines with a single catalyst, again leaving ample chemical space for such protocols to be developed.
Recently, our group reported efficient alkene isomerization catalyzed by well-defined iron(0) and cobalt(I) PCNHCP pincer complexes that proceeded either by an alkyl- (Fe) or allyl-type (Co) mechanism (Figure 1).20 Building upon the success of these isomerization catalysts, herein we report that the cobalt PCNHCP pincer complex [(PCNHCP)CoCH3] (Co−Me) is an excellent catalyst for the selective isomerization of allylamines, allylamides, allyl-aldimines, and allyl-ketimines (Figure 1).21 In addition, the resulting enamines were used in a one-pot sequential procedure for the inverse-electron-demand Diels−Alder reaction that enables facile synthesis of diversely substituted heterobiaryls, which is further discussed in this report.
■ RESULTS AND DISCUSSION
Given our previous experience in alkene isomerization and the availability of well-defined cobalt(I) PCNHCP pincer complexes, we sought to establish whether [(PCNHCP)CoMe] (Co−Me) could efficiently isomerize N-allylic substrates. To the best of our knowledge, there has been only one report on cobalt-catalyzed isomerization of allylamines,12 while no universal protocol is available to isomerize all three classes of N-allylic substrates. We started our investigation into N-allylic isomerization with Co−Me as the catalyst (5 mol %), N,N-dibenzylallylamine as a model substrate, and toluene-d8 as the solvent at 80 °C. Gratifyingly, the allylamine completely isomerized to the corresponding enamine with exceptional stereoselectivity (E/Z: 37:1). A short optimization protocol revealed that the resulting enamine could also be obtained in excellent yields with 2 mol % of catalyst (Table S1). Using the optimized conditions, we explored a diverse set of electronically or sterically differentiated allylamines (Table 1). As evident from Table 1, allylamines bearing alkyl, aryl, cycloalkyl, heterocyclic, diallyl, and triallyl substituents are all well tolerated, and their isomerization proceeded smoothly with excellent stereoselectivity. Sterically encumbered substrates such as N,N-dicyclohexyl or N,N-diphenyl allylamines, or a combination thereof, all provided the corresponding enamines (5f−5h) in excellent yield, although slightly higher temperatures were required for the isomerization of N,N-diphenyl allylamine. Interestingly, heteroatom-substituted allylamines were also well tolerated (5j−5l), and their isomerization proceeded with complete conversion, although isolation of the resulting enamines gave somewhat moderate yields. The methodology reported herein is also not limited to single-bond isomerization: the neutral Co−Me complex is an excellent catalyst for multiple-bond isomerization. While single-bond isomerization was observed at 50 °C, selective two-bond isomerization products were obtained at 80 °C (5m and 5n, Scheme 1).
Besides enamines, we were also interested in whether Co−Me could be used to isomerize N-allylamides, since the resulting enamides are extensively utilized in various organic transformations.5c,d,22 Although several methods are available for their synthesis,23 transition-metal-catalyzed isomerization is one of the most convenient and atom-economical routes.4a,b,16−18 Consequently, we set out to test the isomerization of N-allylamides with our previously established reaction protocol (Table 1). Gratifyingly, the isomerization of N-allyl-N-methylbenzamide proceeded readily at 80 °C and produced the corresponding enamide with excellent stereoselectivity (Table 1; 6a). Changing the nature of the benzamide to include electron-donating (e.g., −Me, −OMe, or −NMe2) or electron-withdrawing substituents (e.g., −CN or −CF3) affected neither the yield nor the stereoselectivity of the reaction (Table 1; 6b−6f). Likewise, changing the substitution pattern on the arene ring did not affect yield or stereoselectivity (Table 1; 6g and 6h). To investigate how steric parameters influence the isomerization reaction, we changed the N-methyl substituent to benzyl, phenyl, or cyclohexyl. In all cases, the corresponding enamides (6i−6k) were obtained in good yields (>94%) with moderate to excellent E-stereoselectivity (E/Z ≥ 6:1). Even N-allyl-N-methyl-picolinamide could be isomerized with excellent E-selectivity (Table 1; 6l, E/Z: 20.4:1). These results demonstrate that our recently reported Co−Me complex is an excellent catalyst for the stereoselective isomerization of N-allylamines and N-allylamides.
Driven by the successful isomerization of these substrates, we sought to provide easy access to 1,3-azadienes via the isomerization of N-allylimines. While they are useful substrates in organic synthesis, accessing the 1,3-azadiene motif is difficult and frequently relies on base-mediated isomerization of allylimines, which proceeds with poor yields and selectivity.24 Recently, a different route was reported by Trost and co-workers, who accessed the azadiene via a palladium-catalyzed oxidative allylic alkylation.25 To the best of our knowledge, there has been no report on first-row transition-metal-catalyzed one-bond isomerization of N-allylimines.
To test the isomerization of N-allylimines, we selected a phenyl aldimine as the benchmark substrate with Co−Me as the catalyst. Using the optimized reaction conditions (vide supra), the corresponding 2-aza-1,3-diene (7a) was obtained in 94% yield. Compared with the isomerization of N-allylamines and amides, the E-stereoselectivity is only moderate (E/Z = 2.2:1). Further exploration of the substrate scope revealed that electronically differentiated phenyl aldimines are isomerized efficiently, with both electron-donating (e.g., −Me, −OMe, and −NMe2) and electron-withdrawing (e.g., −CN or −CF3) substituents well tolerated (Table 2; 7b−7f). Furthermore, ortho substitution on the phenyl ring (7g) did not impede the transformation. Similarly, the trisubstituted aryl (7j) and 1-naphthyl (7k) allylimines were also tolerated, albeit longer reaction times were necessary to obtain complete conversion of the substrate. To our delight, nonaromatic (7l) and heteroaromatic (7h, 7i) allylimines were efficiently isomerized to the corresponding 2-aza-1,3-dienes in good to moderate yields. Finally, we were also able to extend this methodology to N-allylketimines. Akin to their aldimine congeners, similar yields and stereoselectivities were obtained (Table 2; 8a−8l), although slightly higher temperatures (90 °C) were required to complete the reaction. Finally, to demonstrate the applicability and scalability of the N-allylic isomerization protocol reported herein, the gram-scale syntheses of 6a and 8l were carried out (Scheme 2).
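The E/Z values quoted throughout (for example 37:1, 2.2:1 and 20.4:1) are ratio forms of the isomer distribution determined by NMR integration; converting between the ratio and a percentage composition is simple arithmetic. The sketch below illustrates that conversion; the integral-based helper is a generic convenience and the printed values reproduce only the ratios quoted in the text, not any additional data.

```python
def ez_composition(e_to_z_ratio):
    """Convert an E/Z ratio (e.g. 37.0 for 37:1) into (% E, % Z)."""
    pct_e = 100.0 * e_to_z_ratio / (e_to_z_ratio + 1.0)
    return pct_e, 100.0 - pct_e

def ez_ratio_from_integrals(integral_e, integral_z):
    """E/Z ratio from relative 1H NMR integrals of the E and Z vinyl resonances."""
    return integral_e / integral_z

# Ratios reported in the text
for ratio in (37.0, 2.2, 20.4):
    e, z = ez_composition(ratio)
    print(f"E/Z = {ratio}:1  ->  {e:.1f}% E, {z:.1f}% Z")
```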
Overall, the methodology reported herein is applicable to the isomerization of allyl (i) amines, (ii) amides, and (iii) imines, and it can also be extended to one- and multiple-bond isomerization strategies. Although a wide scope of substrates is tolerated (vide supra), any substitution on the allyl fragment results in a complete loss of catalytic activity, most likely due to steric crowding around the metal center.20a Current research is centered on enabling the isomerization of N-allylic di-, tri-, and tetra-substituted alkenes, which bear great synthetic relevance.
Considering the importance of 2-aza-1,3-dienes as substrates in organic chemistry, the isomerized products can be readily converted into other six-membered heterocycles25 via an inverse-electron-demand Diels−Alder cycloaddition (Scheme 3A). The one-step formation of pyridine-containing motifs would be a valuable asset in the synthesis of natural products and pharmaceuticals. We performed this cycloaddition with the electron-deficient 2-aza-1,3-diene 7a and ethyl 3-(pyrrolidin-1-yl)acrylate (11) in the presence of MgBr2·Et2O as a promoter. Subsequent oxidation with catalytic amounts of Pd/C (23 mol %; 5 wt %, based on metal) resulted in the formation of various heterobiaryls as single regioisomers in low to moderate yields (9a−9c). Note that in the study by Trost and co-workers, similar yields were obtained for a multistep synthesis. Realizing that the enamine coupling partners could also be accessed via our isomerization protocol, we envisioned developing a one-pot procedure in which both the 2-aza-1,3-diene and the enamine starting materials are obtained via our cobalt-catalyzed isomerization protocol. To test the one-pot cycloaddition, N-allyl morpholine and the phenyl aldimine were mixed in a J. Young tube, and the reaction was heated at 80 °C with 5 mol % Co−Me catalyst. Unfortunately, only the phenyl aldimine was completely converted to the 2-aza-1,3-diene, with less than 5% conversion of the N-allylamine. Even increasing the reaction time and catalyst loading did not improve the conversion of the N-allylamine to the corresponding enamine. Most likely, strong coordination of the 2-aza-1,3-diene to the cobalt metal center prevents further isomerization of the N-allylamine. Indeed, when N-allyl morpholine was first added to a mixture of Co−Me in toluene-d8, complete isomerization was observed, as reported in Table 1. Subsequent addition of the N-allylaldimine resulted in quantitative formation of the 2-aza-1,3-diene, as judged by 1H NMR spectroscopy. With both substrates now available through cobalt-catalyzed isomerization, a sequential one-pot procedure was developed for the synthesis of diversely substituted 2-phenylpyridines (Scheme 3B).
To illustrate, in a one-pot procedure, N-allyl morpholine was isomerized with 5 mol % Co−Me catalyst at 80 °C. Subsequent addition of the aryl aldimine to the same reaction mixture resulted in the formation of the 2-aza-1,3-diene product. To facilitate the Diels−Alder reaction, MgBr2·Et2O was added, followed by catalytic amounts of Pd/C (23 mol %; 5 wt %, based on metal), to furnish the desired pyridine-biaryl as a single regioisomer (Figures S150−S151). This methodology is widely applicable and can be used to access both electron-rich and electron-poor 2-phenylpyridines (10a−10c) in moderate to excellent yields (Scheme 3B).
Mechanistically, our previous studies have shown that the isomerization reaction occurs via a π-allyl mechanism.20b We envisioned that such a mechanism is also operative for the isomerization of N-allylic substrates to generate the respective N-vinyl products. However, in the case of N-allylimines, two intermediates are possible during the isomerization process: (i) an all-carbon π-allyl Co(III) complex and (ii) a 2-aza-π-allyl Co(III) complex, which are most likely in equilibrium. Our experiments indicate that for the N-allylimines, 2-aza-1,3-dienes are the sole products of the reactions, with no trace of the 1-azadienes, which suggests that the reaction proceeds through the all-carbon π-allyl Co(III) intermediate.
■ CONCLUSIONS
In conclusion, we have established the versatility of the neutral Co(I)−Me complex as an efficient catalyst for the isomerization of N-allyl substrates. The isomerization of N-allylamines, N-allylamides, and N-allylimines exhibits excellent E-stereoselectivity, occurs under moderate conditions, and is compatible with a wide variety of functional groups, including electron-donating, electron-withdrawing, and heteroaromatic substituents. Furthermore, the Co(I)−Me-catalyzed isomerization protocol could be extended to a sequential one-pot inverse-electron-demand Diels−Alder reaction to give access to diversely substituted 2-phenylpyridines. To the best of our knowledge, the methodology reported herein represents the first example of a single catalyst that is able to tackle the isomerization of any kind of N-allylic substrate under mild reaction conditions. Current efforts are directed toward developing Z-selective protocols and enabling the isomerization of di- and trisubstituted alkenes, which is currently problematic.
Table 1 .
Isomerization of N-Allylamines and N-Allylamides Catalyzed by a Neutral Co(I)−Me Catalyst. a Reactions were performed with 2−5 mol % catalyst and 0.15 mmol of substrate in 400 μL of toluene-d8 for 6−24 h at 80−90 °C. Yields and stereoselectivities (E vs Z) were determined by 1H and 13C NMR spectroscopy.
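As a worked example of the scale implied by this footnote (0.15 mmol of substrate in 400 μL of toluene-d8 with 2−5 mol % catalyst), the snippet below computes the substrate concentration and the catalyst amount. The molar mass used for the mass estimate is a hypothetical placeholder, not a value reported for the Co−Me complex.

```python
substrate_mmol = 0.15
solvent_uL = 400.0

# Substrate concentration in mol/L
conc_M = (substrate_mmol / 1000.0) / (solvent_uL * 1e-6)
print(f"substrate concentration ~= {conc_M:.3f} M")  # ~0.375 M

for loading_pct in (2, 5):
    cat_mmol = substrate_mmol * loading_pct / 100.0
    print(f"{loading_pct} mol % catalyst -> {cat_mmol * 1000:.1f} umol of Co-Me")
    # A mass estimate needs the molar mass of the complex, which is not given
    # in this extract; M_cat below is a purely illustrative placeholder.
    M_cat = 500.0  # g/mol, hypothetical
    mass_mg = cat_mmol / 1000.0 * M_cat * 1000.0
    print(f"  ~{mass_mg:.1f} mg at an assumed {M_cat:.0f} g/mol")
```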
Scheme 1 .
Scheme 1. Selective One- and Two-Bond Isomerization of Terminal Alkenes Catalyzed by the Co(I)−Me Complex
|
v3-fos-license
|
2021-07-26T00:06:34.362Z
|
2021-06-03T00:00:00.000
|
236292246
|
{
"extfieldsofstudy": [
"Computer Science"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.mdpi.com/2076-0825/10/6/118/pdf?version=1622701572",
"pdf_hash": "24dc0dbb53bdf380f5a8313804db016c61ad1b05",
"pdf_src": "Adhoc",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46491",
"s2fieldsofstudy": [
"Engineering"
],
"sha1": "adef82f144d77fc45dd1a8e8f6abca3f2404ca6b",
"year": 2021
}
|
pes2o/s2orc
|
Design and Experimental Validation of a Fuzzy Cascade Controller for a Zero-Power Permanent Magnetic Suspension System with Variable Flux Path Control
Magnetic suspension technology has been a promising method to achieve contactless movement, and its advantages are smooth motion, no wear, no noise and low maintenance. In previous studies, the suspension force was mainly controlled by the current in the coils, which can lead to energy loss. To solve the problem of energy loss, we have proposed a novel zero-power permanent magnetic suspension system with variable flux path control (ZPPMSS-VFPC); moreover, the interference suppression and response of the ZPPMSS-VFPC need to be further investigated. This paper aims to improve the robustness and decrease the response time of the ZPPMSS-VFPC; as a result, a fuzzy cascade controller composed of a fuzzy controller and a cascade controller is designed and applied, in which the investigated fuzzy cascade control methods include the position loop fuzzy cascade control (PLFCC) and the angle loop fuzzy cascade control (ALFCC). The structure and working principle of the proposed ZPPMSS-VFPC are introduced, and the theoretical modeling and the fuzzy cascade controller design of the system are presented. An experimental setup is established to validate the simulation results and to investigate the control effect of the designed controller. The experimental results demonstrate that the response times of the fuzzy cascade controller under the displacement disturbance and the force disturbance are 0.5 s and 0.6 s faster than those of the cascade control, respectively. Furthermore, the control effect of the PLFCC is better than that of the ALFCC. Overall, the fuzzy cascade controller has both strong adaptability and easy parameter adjustment, and it can be applied to complex magnetic suspension systems.
Introduction
Magnetic suspension is a popular technology that applies magnetic force to a target object to realize non-contact motion between objects. With its development, magnetic suspension technology has been widely applied in various fields, such as bearings [1,2], trains [3,4] and compressors [5]. Furthermore, magnetic suspension is also used in energy harvesting, where the magnetic suspension wind turbine [6,7] uses magnetic force to suspend the blades in the air gap, and the electrical energy generated by cutting magnetic induction lines is stored in a battery. Currently, magnetic suspension technology is developing rapidly and mainly includes electromagnetic technology [8,9], permanent magnet (PM) technology [10,11] and hybrid magnet technology [12,13].
Compared with ordinary trains, the maglev train has the advantages of higher speed, less noise and less harm to the environment; hence, the magnetic force technology used to suspend such systems has attracted extensive research. One study analyzed the tradeoff between the simplicity of the controller structure and the performance of the closed-loop system for an active magnetic bearing and proposed a fractional-order Proportional-Integral-Derivative controller to achieve that tradeoff. To verify the robustness of the ZPPMSS-VFPC under external disturbances, Zhou et al. [30] adopted two controllers, a cascade PD controller and a parallel PD controller, to investigate its suspension characteristics. The traditional PID controller has the advantages of simple structure, good stability, reliable operation and convenient adjustment. However, the proposed ZPPMSS-VFPC is strongly nonlinear and subject to strong interference; the traditional PID controller cannot adapt to such magnetic suspension systems, which can cause large overshoot and long response times.
Therefore, to obtain better dynamic performance and improve the control effect of the proposed ZPPMSS-VFPC, this paper designs a fuzzy cascade controller and uses it to investigate the dynamic performance of the magnetic suspension system. The designed fuzzy cascade controller consists of a fuzzy controller and a cascade controller; it combines the strong adaptability of fuzzy control with the easy parameter adjustment of PID control, so its control effect is better than that of the traditional PID controller. The content of this paper is organized as follows: Section 2 introduces the structure and working principle of the ZPPMSS-VFPC. In Section 3, the theoretical model and the fuzzy cascade controller are derived and designed, respectively. In Section 4, the control results of the two controllers for the system are obtained by simulation. Experiments are carried out to verify the fuzzy cascade controllers in Section 5, where the comparison results between the fuzzy cascade controller and the cascade controller are illustrated. The conclusion is presented in Section 6.
Structure
The proposed ZPPMSS-VFPC can realize non-contact operation between the transportation device and the track. The 3D model of the ZPPMSS-VFPC is shown in Figure 1a. The device is driven by a servo motor, and there is a ferromagnetic plate between the frames, which is utilized to eliminate the interaction between the two PMs. Figure 1b shows the structure of the zero-power permanent magnetic suspension device; the ZPPMSS-VFPC mainly consists of two PMs, four F-shape cores, a displacement sensor, a suspended object, a fixed object, frames and a bearing. The two PMs are connected to the servo motor through a coupling, and the F-shape cores and the displacement sensor are fixed on the frame. Furthermore, one PM, the suspended object and two of the F-shape cores are mounted on the front of the frame, and the other PM, the fixed object and the other two F-shape cores are symmetrically mounted on the back of the frame, with the magnetic poles of the two PMs staggered by 90 degrees. Meanwhile, the suspended object can move with a single degree of freedom through the coupling.
Suspension Principle
A schematic diagram of the suspension principle is illustrated in Figure 2; the suspension force of the system is changed by changing the rotation angle of the PM. When the rotation angle of the PM ring is zero, the magnetic flux starts from the N pole of the PM, passes through the F-shape core and returns to the S pole of the PM; hence, there is no suspension force in the system. When the PM is turned through a certain angle, part of the magnetic flux starts from the N pole of the PM ring, passes through one F-shape core, the suspended object and the other F-shape core, and returns to the S pole of the PM. The F-shape cores therefore generate a suspension force, and when this force equals the gravity of the suspended object, the object is suspended. Overall, the suspension force is changed by changing the amount of magnetic flux passing through the suspended object: the more flux passes through the suspended object, the greater the suspension force that can be generated.
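The suspension condition described here is a force balance: the magnetic force produced at a given PM rotation angle must equal the weight of the suspended object. The short sketch below works that balance through for a purely hypothetical, monotonic force-angle map and a hypothetical mass; neither the functional form nor the numbers come from the paper, they are illustrative assumptions only.

```python
import math

m = 0.5        # kg, hypothetical mass of the suspended object
g = 9.81       # m/s^2
F_max = 10.0   # N, hypothetical peak suspension force at a 90-degree PM rotation

def suspension_force(theta_deg):
    """Purely illustrative force-angle map: zero force at 0 deg, F_max at 90 deg."""
    return F_max * math.sin(math.radians(theta_deg)) ** 2

# Levitation requires F(theta) = m*g; for this illustrative map the balance
# angle can be solved in closed form.
theta_eq = math.degrees(math.asin(math.sqrt(m * g / F_max)))
print(f"balance angle ~= {theta_eq:.1f} deg, "
      f"check: F = {suspension_force(theta_eq):.2f} N vs mg = {m * g:.2f} N")
```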
Zero Power Principle
According to the analysis of Section 2.2.1, the suspended object is held up by the suspension force generated by the F-shape cores, and it can be seen from Figure 1 that this force is transmitted directly to the frame through the F-shape cores mounted on it; the PMs and the servo motor are therefore not loaded by the suspended object, and the servo motor consumes no energy in the stable suspension state. However, because of the unique structure of the F-shape core, a torque acts on the PM, which by itself would give the device only quasi-zero-power characteristics. In the proposed device this torque is eliminated by using two PMs with their poles staggered 90 degrees apart, which gives the suspension device its zero-power characteristic.
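The torque-cancellation argument can be made concrete with a small symbolic check. The sinusoidal torque model used below is an assumption introduced purely for illustration (the paper does not give the torque expression in this extract); it simply shows that two identical torque ripples with a 180-degree period sum to zero when the magnets are offset by 90 degrees of rotation.

```python
import sympy as sp

theta = sp.symbols("theta", real=True)
T0 = sp.symbols("T0", positive=True)

# Hypothetical torque ripple on one PM, assumed to have a 180-degree period.
T_front = T0 * sp.sin(2 * theta)
# Second PM mounted with its poles staggered by 90 degrees.
T_rear = T0 * sp.sin(2 * (theta + sp.pi / 2))

print(sp.simplify(T_front + T_rear))  # -> 0 for every theta
```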
Novelty of the Proposed Magnetic Suspension Device
According to the structure and working principle introduced above, the novelties of the proposed magnetic suspension device are its method of changing the magnetic force and its zero-power characteristic. In the designed device, the rotation angle of the permanent magnet controls the amount of magnetic flux passing through the suspended object, and thereby the magnetic force of the suspension system. Furthermore, the magnetic force provided by the F-shape cores balances the gravity of the suspended object, and the F-shape cores are mounted on the frame. Hence, the permanent magnet and the motor are not loaded by the gravity of the suspended object, and the designed magnetic suspension device has the zero-power characteristic.
Theoretical Modeling
The schematic diagram of the suspension system and related parameters is shown in Figure 3, in which the PM and the suspended object are rotated and swung, respectively.
According to Figure 3, the torque on the PM and the suspension force are formulated in Equations (2) and (3), where i = 1 and 2 denote the left and right cores of the front structure, respectively, and i = 3 denotes the rear core of the device. The torque borne by the PM is applied to the motor shaft; hence, the total torque on the motor shaft can be calculated. When the system is at the equilibrium position, the dynamic equations of the ZPPMSS-VFPC are obtained as Equations (4) and (5). Next, substituting Equations (2) and (3) into Equations (4) and (5) gives the dynamic equations of the system in explicit form. Since the controlled object operates around a stable suspension state, applying the Taylor theorem yields the linearized equations at the equilibrium position, where h0 is the distance between the sensor and the suspended object in the horizontal position, and θ0 and d0 are the angle and the distance between the sensor and the suspended object at the equilibrium position, respectively. According to Equations (8) and (9), the state-space equation of the system can then be written.
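The paper's actual force and torque expressions (Equations (2)−(9)) are not shown here, so the function below is a generic placeholder; the sketch only illustrates the first-order Taylor linearization about the equilibrium point (d0, θ0) that the modeling step described above relies on.

```python
# Minimal sketch of the linearization step, using SymPy. The symbolic force
# expression f(d, theta) is a hypothetical stand-in, not the paper's equation.
import sympy as sp

d, theta = sp.symbols("d theta", real=True)
d0, theta0 = sp.symbols("d0 theta0", real=True)
k = sp.symbols("k", positive=True)

# Hypothetical nonlinear suspension force: decays with air gap d, grows with angle.
f = k * sp.sin(theta) ** 2 / d ** 2

# First-order Taylor expansion about the equilibrium (d0, theta0).
f0 = f.subs({d: d0, theta: theta0})
df_dd = sp.diff(f, d).subs({d: d0, theta: theta0})
df_dtheta = sp.diff(f, theta).subs({d: d0, theta: theta0})
f_lin = f0 + df_dd * (d - d0) + df_dtheta * (theta - theta0)

print(sp.simplify(f_lin))
```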
Controller Design
Based on the cascade control of the permanent magnetic system, the cascade controller consists of a position loop and an angle loop; hence, two fuzzy cascade control methods are designed. In the position loop fuzzy cascade controller, the fuzzy rules are added to the position loop of the cascade control, and in the angle loop fuzzy cascade controller, the fuzzy rules are added to the angle loop. The designed controllers therefore combine the strong adaptability of fuzzy control with the easy parameter adjustment of cascade control. Figure 4a,b illustrate the position loop fuzzy cascade control (PLFCC) and the angle loop fuzzy cascade control (ALFCC), respectively.
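To make the cascade structure concrete, the sketch below shows one plausible discrete-time form of the two nested PD loops: the position loop turns the position error into an angle reference, and the angle loop turns the angle error into a motor-current command. The gains reuse the kp1, kp2, kd1 and kd2 values quoted later in the simulation section, but the sample time, signal units and overall scaling are assumptions made for illustration.

```python
class PD:
    """Discrete PD term: u = kp*e + kd*de/dt (backward-difference derivative).

    The first call treats the previous error as zero, which is acceptable for a sketch.
    """
    def __init__(self, kp, kd, dt):
        self.kp, self.kd, self.dt = kp, kd, dt
        self.prev_e = 0.0

    def step(self, e):
        de = (e - self.prev_e) / self.dt
        self.prev_e = e
        return self.kp * e + self.kd * de


class CascadePD:
    """Position loop -> angle reference -> angle loop -> motor-current command."""
    def __init__(self, dt=1e-3):                          # dt is an assumed sample time
        self.pos_loop = PD(kp=1300.0, kd=1200.0, dt=dt)   # P1D1 gains quoted in Section 4
        self.ang_loop = PD(kp=200.0, kd=0.3, dt=dt)       # P2D2 gains quoted in Section 4

    def step(self, pos_ref, pos_meas, ang_meas):
        ang_ref = self.pos_loop.step(pos_ref - pos_meas)  # outer loop: position error -> angle reference
        return self.ang_loop.step(ang_ref - ang_meas)     # inner loop: angle error -> current command


# Example call with placeholder signal values (units assumed to be mm and deg).
ctrl = CascadePD(dt=1e-3)
print(ctrl.step(pos_ref=4.8, pos_meas=4.9, ang_meas=30.0))
```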
Design of Domains and Membership Functions
Based on the traditional PID controller, the deviation e and the rate of change of the deviation ec are used as the inputs of the fuzzy controller, and the corrections of the PID parameters, ∆kp and ∆kd, are its outputs. It can be seen from Figure 4a that e1 and ec1 are the position deviation and the rate of change of the position deviation, respectively, which are input to the fuzzy controller to modify the parameters of the P1D1 controller. In addition, Figure 4b shows that e2 and ec2 are the angle deviation and the rate of change of the angle deviation, respectively, which are input to the fuzzy controller to modify the parameters of the P2D2 controller. Furthermore, the fuzzy subsets are negative big (NB), negative middle (NM), negative small (NS), zero (ZO), positive big (PB), positive middle (PM) and positive small (PS).
The domains of the fuzzy variables are chosen to cover the expected ranges of the deviations and of the parameter corrections. Moreover, when the intersection membership value of two adjacent curves of the membership function is small, the sensitivity of the controller is high. The output of the controller in the proposed permanent magnet suspension system is used to control the current of the motor, which controls the rotation angle of the permanent magnet, and the rotation angle in turn controls the magnetic force of the system; hence, the suspension system needs a fast response and a highly sensitive controller. Since the intersection membership value of two adjacent curves of the triangle-shaped membership function is smaller than that of other membership functions, this paper adopts the triangle-shaped membership function, and the calculated fuzzy value is transformed into the crisp value recognized by the system using the maximum membership method. In the triangle-shaped membership function, the parameters a and c are the feet of the triangle and the parameter b is its top.
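A minimal implementation of the triangle-shaped membership function described above (feet at a and c, top at b), together with maximum-membership defuzzification, might look as follows. The seven-set partition and the [-300, 300] domain are assumed placeholders, since the actual domain values are not recoverable from this text.

```python
def tri_mf(x, a, b, c):
    """Triangle-shaped membership: 0 at the feet a and c, 1 at the top b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Example: seven subsets (NB ... PB) on an assumed domain of [-300, 300].
LABELS = ["NB", "NM", "NS", "ZO", "PS", "PM", "PB"]
CENTERS = dict(zip(LABELS, [-300, -200, -100, 0, 100, 200, 300]))

def fuzzify(x):
    """Membership degree of x in each subset (overlapping triangles, 100 units half-width)."""
    return {lab: tri_mf(x, c - 100, c, c + 100) for lab, c in CENTERS.items()}

def defuzz_max(memberships):
    """Maximum-membership method: return the center of the strongest subset."""
    best = max(memberships, key=memberships.get)
    return CENTERS[best]

m = fuzzify(130.0)
print(m)              # mostly PS (0.7), partly PM (0.3)
print(defuzz_max(m))  # -> 100
```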
Fuzzy Control Rules
There are many possible sets of fuzzy control rules. If the fuzzy controller has many rules, the rules are detailed and the control effect is good, but programming becomes difficult and a lot of memory is occupied. If it has few rules, implementation is convenient, but the control effect may not reach the expected level. Therefore, both simplicity and control effect should be considered when the fuzzy control rules are selected; this paper uses seven types of fuzzy control rules. When the deviation e is large, the proportional coefficient kp should be increased to improve the response speed of the system, and a smaller differential coefficient kd should be taken to avoid an instantaneous surge at the start of the adjustment; when the deviation e is small, a smaller differential coefficient kd is adopted to prevent vibration and excessively fast error changes; when the deviation e is moderate, a smaller proportional coefficient kp should be selected to reduce the overshoot, and when the rate of change of the deviation ec is small, a smaller differential coefficient kd improves the response time of the system. The resulting rules for ∆kp and ∆kd are shown in Tables 1 and 2.
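Tables 1 and 2 themselves are not reproduced here, so the rule base below is an illustrative placeholder that merely encodes the qualitative guidance of this paragraph (large deviation: raise kp and lower kd; small deviation: lower kd; moderate deviation: lower kp). It is not the paper's actual rule table, and the numeric step sizes are arbitrary.

```python
# Illustrative 3x3 slice of a fuzzy rule base for the PD gain corrections.
# Rows: magnitude of the deviation e; columns: magnitude of its rate of change ec.
# Entries are (delta_kp, delta_kd) linguistic labels mapped to signed steps.
STEP = {"NB": -2, "NM": -1, "ZO": 0, "PM": +1, "PB": +2}

RULES = {
    #  |e|           |ec| small          |ec| medium         |ec| large
    "small":  {"small": ("ZO", "NM"), "medium": ("ZO", "ZO"), "large": ("NM", "ZO")},
    "medium": {"small": ("NM", "NM"), "medium": ("NM", "ZO"), "large": ("NB", "PM")},
    "large":  {"small": ("PB", "NB"), "medium": ("PM", "NM"), "large": ("PM", "ZO")},
}

def gain_correction(e_mag, ec_mag, step_kp, step_kd):
    """Look up (delta_kp, delta_kd) for the given linguistic inputs."""
    dkp_lab, dkd_lab = RULES[e_mag][ec_mag]
    return STEP[dkp_lab] * step_kp, STEP[dkd_lab] * step_kd

print(gain_correction("large", "small", step_kp=10.0, step_kd=0.01))
```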
Simulation Analysis
This section establishes the simulation block diagram with the fuzzy cascade controller to investigate the stability of the suspension system. According to reference [31], the parameters of the suspension system are calculated and adopted as shown in Table 3. Furthermore, the quantification factors of e1, ec1, e2 and ec2 are 1, and the values of kp1, kp2, kd1 and kd2 are 1300, 200, 1200 and 0.3, respectively. Meanwhile, to analyze the performance of the two fuzzy cascade controllers, the simulation results are examined for a displacement disturbance of 0.1 mm and a force disturbance of 0.1 N, with the rotation angle, input current and displacement taken as the performance indexes. According to the structure and principle of the fuzzy cascade controller, the displacement disturbance and the force disturbance are simulated in the Matlab/Simulink environment.
Simulation Results of the Displacement Disturbance
The simulation results of the PLFCC and the ALFCC under the displacement disturbance are plotted in Figure 5a,b, respectively, and the position of the suspended object, the angle of the PM and the input current of the motor are recorded. It can be seen that the motor current and the magnet angle increase as the displacement of the suspended object increases, and the position of the suspended object is 4.8 mm in the suspension state under both fuzzy cascade control methods when the angle of the PM reaches a stable state again. However, the overshoot of the system using the PLFCC is greater than that of the ALFCC. Furthermore, the changing current of the PLFCC is greater than that of the ALFCC, and the input current of the motor is less than 0.1 A in the suspension state; hence, zero-power performance can be achieved. In addition, the response time of the ALFCC is nearly 0.13 s faster than that of the PLFCC.
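The response times quoted here can be extracted from a simulated (or logged) trace by measuring how long the displacement takes to re-enter and stay inside a tolerance band around its setpoint after the disturbance. The sketch below applies that definition to a synthetic trace; the trace, the band width and the settling criterion are assumptions made for illustration, not the paper's actual post-processing.

```python
import numpy as np

def response_time(t, x, t_disturb, x_target, tol):
    """Time after the disturbance until x re-enters and stays within +/- tol of x_target."""
    mask = t >= t_disturb
    t_after, x_after = t[mask], x[mask]
    outside = np.abs(x_after - x_target) > tol
    if not outside.any():
        return 0.0
    last_out = np.nonzero(outside)[0][-1]      # last sample still outside the band
    if last_out + 1 >= len(t_after):
        return float("nan")                    # never settles within the record
    return t_after[last_out + 1] - t_disturb

# Synthetic recovery trace, purely for illustration: disturbance at 3.0 s,
# nominal position 4.8 mm, settling band of +/- 0.02 mm.
t = np.linspace(0.0, 4.0, 4001)
ripple = 0.3 * np.exp(-8.0 * (t - 3.0)) * np.cos(20.0 * (t - 3.0))
x = 4.8 + np.where(t >= 3.0, ripple, 0.0)
print(f"response time ~= {response_time(t, x, 3.0, 4.8, 0.02):.2f} s")
```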
Simulation Results of the Force Disturbance
The simulation results of the system with the force disturbance added at 3.2 s are shown in Figure 6, and the results between 2.0 s and 4.0 s for the PLFCC and the ALFCC are illustrated in Figure 6a,b, respectively. It can be seen that the magnet angle and the motor current increase as the load on the suspended object increases; the motor current then decreases as the suspended object approaches a new stable state, and finally the magnet angle and the motor current reach a stable state again. The position of the suspended object is 5.3 mm in the suspension state under both fuzzy cascade control methods when the system is affected by the external force; the PLFCC reaches the new suspension state 0.19 s faster than the ALFCC, and the changing current of the PLFCC is greater than that of the ALFCC.
Experiment Verification
The purpose of this section is to verify the performance of the proposed permanent magnetic system under the two fuzzy cascade control methods. A prototype is manufactured to realize the single-point permanent magnetic suspension system, and an experimental setup is established to measure the displacement, rotation angle and input current of the suspension system. As shown in Figure 7, the prototype mainly consists of two PMs, four F-shape cores, a displacement sensor, a servo motor, a bearing and a suspended object. Suspension experiments are carried out to validate the simulation results; furthermore, comparison experiments between the fuzzy cascade control and the cascade control are carried out to analyze the effect of the fuzzy cascade control on the suspension performance of the system.
Experimental Setup
As shown in Figure 8, the experimental setup is established to investigate the effect of the fuzzy cascade control on the suspension system. The suspension system mainly consists of a PC, a dSPACE 1104, a servo controller (ESCON 70/10, Germany) and the prototype, in which the servo controller is used to drive the servo motor (EC-max30, Switzerland), and the PMs (NdFeB30) are driven by the servo motor. Moreover, the displacement sensor (EX-V, Keyence) is utilized to measure the displacement of the suspended object; the displacement signal of the sensor and the angle signal of the motor are collected by the A/D interface of the dSPACE 1104, and the collected data are stored on the PC. The output signal calculated by the controller is transmitted to the servo controller through the D/A interface of the dSPACE 1104.
Experimental Results of the Displacement Disturbance
Figure 9 illustrates the compared results of the displacement disturbance experiments under the cascade control and the fuzzy cascade control, in which the results between 2.0 s and 4.0 s for the PLFCC and the ALFCC are shown in Figure 9a,b, respectively. With the system in its initial stable state, the displacement disturbance signal is imposed at 3.0 s, and the angle of the PM, the motor current and the position of the suspended object are tracked. It can be found from Figure 9 that the angle of the PM and the suspension force decrease as the air gap increases when the displacement disturbance is imposed; to maintain stable suspension of the suspended object, the controller increases the angle of the PM so as to increase the suspension force. Furthermore, the results demonstrate that the overshoot of the fuzzy cascade control is smaller, and compared with the cascade control, the response times for the PLFCC and the ALFCC to achieve stable suspension are 0.50 s and 0.40 s faster, respectively. Meanwhile, the motor current is zero in the new stable state, which meets the zero-power characteristic. In addition, the overall trend of the measurements is consistent with the simulation results.
Experimental Results of the Force Disturbance
The compared experimental results of the force disturbance under the cascade control and the fuzzy cascade control are shown in Figure 10; the system in the stable state is subjected to the external force disturbance signal at 3.0 s. The results between 2.0 s and 4.0 s for the PLFCC and the ALFCC are plotted in Figure 10a,b, respectively. The results demonstrate that the air gap increases when the suspended object is subjected to the external force, and the motor current calculated by the controller increases to enlarge the angle of the PM, which provides a greater suspension force to keep the suspended object stable. At the same time, compared with the cascade control, the response times for the PLFCC and the ALFCC to achieve stable suspension are 0.60 s and 0.50 s faster, respectively. Moreover, because the external force disturbance is applied by placing a weight on the middle of the suspended object, the placement speed can affect the response time of the system when the external force is added. However, the motor current calculated by the two control methods is the same in the new stable state.
Comparison Analysis
Based on the simulated and experimental results, the comparisons of the response time of the fuzzy cascade controller under the displacement disturbance and the force disturbance are shown in Table 4. It can be found that the simulated response times are less than 0.4 s, and the response time difference between the two control methods is less than 0.2 s. Furthermore, the experimental response times for the two control methods are less than 0.3 s, and the PLFCC reaches the new suspension state 0.10 s faster than the ALFCC under the external disturbances. In addition, the response time difference of the simulated results is greater than that of the experimental results. According to the above analysis, there is a certain deviation between the simulated and experimental results, which is caused by the error of the mathematical model. The mathematical model of the suspension system is linearized at the equilibrium position, and the controller of the system is a PD controller; hence, there is a certain error in the mathematical model when the system reaches a new equilibrium position. In addition, the mechanical system has a certain hysteresis in operation, and hysteresis also occurs when the F-shape cores are magnetized by the permanent magnet. Therefore, there is a certain deviation between the simulated and experimental response times.
Discussion
The control effects of the designed fuzzy cascade controller on the proposed ZPPMSS-VFPC have been verified in Section 5.2. Compared with the cascade controller, a fuzzy controller is added to the cascade controller to modify its parameters; hence, the response time of the suspension system with the fuzzy cascade controller is shortened and its overshoot is reduced. Furthermore, two control methods are presented in this paper, in which the fuzzy controllers of the PLFCC and the ALFCC modify the PD parameters of the position loop and the angle loop of the cascade controller, respectively. In the PLFCC, the position deviation and its rate of change are utilized to modify the PD parameters of the position loop, so that a high-accuracy angle reference is obtained from the P1D1 controller; the calculated angle and the feedback angle are then processed by the P2D2 controller to obtain the motor current. In the ALFCC, by contrast, the position deviation is processed by the P1D1 controller to obtain the angle reference, the angle deviation and its rate of change are used to modify the PD parameters of the angle loop, and the angle deviation is then processed by the modified P2D2 controller to obtain the motor current. It can be seen that in the PLFCC the position signal, which plays the key role in the suspension system, is corrected earlier in the chain, which reduces the response time of the system. Therefore, the control effect of the PLFCC is better than that of the ALFCC.
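The difference between the two variants described here can be summarized in a few lines of controller pseudo-code: the fuzzy corrections are applied to the outer (position) PD gains in the PLFCC and to the inner (angle) PD gains in the ALFCC. The signal names, the derivative-on-measurement choice and the additive gain correction are assumptions made for illustration, not the paper's exact implementation.

```python
def fuzzy_cascade_step(mode, e_pos, de_pos, ang_meas, dang_meas, gains, dkp, dkd):
    """One update of the fuzzy cascade controller.

    mode     -- "PLFCC" (retune the position-loop PD) or "ALFCC" (retune the angle loop)
    gains    -- dict with kp1, kd1 (position loop) and kp2, kd2 (angle loop)
    dkp, dkd -- corrections already produced by the fuzzy inference from (e, ec)
    """
    kp1, kd1, kp2, kd2 = gains["kp1"], gains["kd1"], gains["kp2"], gains["kd2"]
    if mode == "PLFCC":                      # fuzzy rules retune the outer (position) loop
        kp1, kd1 = kp1 + dkp, kd1 + dkd
    ang_ref = kp1 * e_pos + kd1 * de_pos     # P1D1: position error -> angle reference
    if mode == "ALFCC":                      # fuzzy rules retune the inner (angle) loop
        kp2, kd2 = kp2 + dkp, kd2 + dkd
    e_ang = ang_ref - ang_meas
    de_ang = -dang_meas                      # derivative on measurement, a common practical choice
    return kp2 * e_ang + kd2 * de_ang        # P2D2: angle error -> motor-current command

gains = {"kp1": 1300.0, "kd1": 1200.0, "kp2": 200.0, "kd2": 0.3}
print(fuzzy_cascade_step("PLFCC", 0.1, 0.0, 30.0, 0.0, gains, dkp=20.0, dkd=-5.0))
```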
Conclusions
The fuzzy cascade controller for the proposed ZPPMSS-VFPC is designed in this paper. According to the characteristics of the cascade control, the PLFCC and the ALFCC are presented, and the control effects of the two fuzzy cascade controllers on the suspension system under the displacement disturbance and the force disturbance are investigated. Control experiments are carried out to verify the control effects of the designed fuzzy cascade controller on the proposed ZPPMSS-VFPC. The conclusions are as follows: (1) The proposed ZPPMSS-VFPC can maintain stable suspension under external disturbances with the designed fuzzy cascade controller. (2) Compared with the cascade controller, the response times of the designed fuzzy cascade controller under the displacement disturbance and the force disturbance are 0.50 s and 0.60 s faster, respectively, and the suspension system with the fuzzy cascade controller has better robustness and smaller overshoot. (3) The response times of the PLFCC under the displacement disturbance and the force disturbance are 0.10 s faster than those of the ALFCC. (4) The response time difference of the experimental results for the two control methods is 0.10 s, which is less than that of the simulated results.
Overall, the designed fuzzy cascade controller has the advantages of a simple structure, strong adaptability and self-adjusting parameters for the single-point permanent magnetic suspension system, which lays the foundation for suspension-performance analysis of the dust-free transportation system with the ZPPMSS-VFPC. In future work, the suspension performance of the dust-free transportation system will be investigated.
Nomenclature
Damping coefficient of the suspended object
J1 Moment of inertia of the motor
J2 Moment of inertia of the suspended object
kt Torque coefficient of the servo motor
i Input current of the servo motor
F External disturbing force acting on the system
ll Length of the suspended object
fmf1 Suspension force generated by the left F-shape core
fmf2 Suspension force generated by the right F-shape core
|
v3-fos-license
|
2016-05-12T22:15:10.714Z
|
2016-02-18T00:00:00.000
|
5019128
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://bmcpublichealth.biomedcentral.com/track/pdf/10.1186/s12889-016-2836-0",
"pdf_hash": "e9625fcaffae6c9197d59aafc6aa7e5d61b6de9d",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46492",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "e9625fcaffae6c9197d59aafc6aa7e5d61b6de9d",
"year": 2016
}
|
pes2o/s2orc
|
The extended Infant Feeding, Activity and Nutrition Trial (InFANT Extend) Program: a cluster-randomized controlled trial of an early intervention to prevent childhood obesity
Background Understanding how we can prevent childhood obesity in scalable and sustainable ways is imperative. Early RCT interventions focused on the first two years of life have shown promise; however, differences in Body Mass Index between intervention and control groups diminish once the interventions cease. Innovative and cost-effective strategies seeking to continue to support parents to engender appropriate energy balance behaviours in young children need to be explored. Methods/Design The Infant Feeding Activity and Nutrition Trial (InFANT) Extend Program builds on the early outcomes of the Melbourne InFANT Program. This cluster randomized controlled trial will test the efficacy of an extended (33 versus 15 month) and enhanced (use of web-based materials and Facebook® engagement) version of the original Melbourne InFANT Program intervention in a new cohort. Outcomes at 36 months of age will be compared against the control group. Discussion This trial will provide important information regarding capacity and opportunities to maximize early childhood intervention effectiveness over the first three years of life. This study continues to build the evidence base regarding the design of cost-effective, scalable interventions to promote protective energy balance behaviors in early childhood, and in turn, promote improved child weight and health across the life course. Trial registration ACTRN12611000386932. Registered 13 April 2011.
Background
Childhood overweight and obesity are prevalent across developed and developing nations [1,2]. While prevalence rates may be slowing in some countries [3], the prevalence of overweight and obesity in children in lower socioeconomic environments appears to have increased [4]. Childhood overweight and obesity remain high priorities for public health and it is imperative that we address prevention comprehensively, including through the design of programs that can be scaled up [5]. Key to scalability are issues of intervention dose (timing, intensity and duration) and the capacity to utilize existing infrastructures [6]; both will determine future cost, and hence, sustainability. Determining the minimal dose of intervention needed for sustained change to children's obesity risk behaviours, and in turn body weight, underpins the rationale for this extended version of an established intervention embedded within existing health services in Victoria, Australia.
Similar to other developed nations [1], in Australia around 25% of children aged 2-17 are overweight or obese [7]. The expression of adiposity begins in early life, with nearly 23% of Australian 2-4 year olds affected [7]. Overweight and obesity are recognized to have negative consequences for children's health and wellness during childhood and through to adult life [8,9]. Further, the timing of weight gain is considered important. Evidence highlights that rapid weight gain across the first two years of life is strongly predictive of later adiposity in both childhood and adolescence [10]. It is important therefore that we acknowledge both the early expression of overweight and obesity and that the early years provide a vitally important opportunity for prevention.
Obesity-promoting lifestyle behaviours are established early. For example, Australian [11,12] and international data [13][14][15] report that from an early age children are consuming diets high in energy-dense foods/drinks and low in fruits and vegetables. Our own data from the Melbourne Infant Feeding Activity and Nutrition Trial (InFANT) Program of children aged 9 and 18 months showed that the consumption of energy-dense, nutrient-poor foods occurs as early as nine months of age, with 12% of dietary energy provided by non-core foods [16]. There is also evidence of high levels of sedentary behaviour in early childhood. For example, a systematic review found that the average duration of television viewing for children under the age of two years reported by studies ranged between half an hour and more than five hours per day [17]. Data from the Melbourne InFANT Program shows that television viewing increases across the early childhood period [18]. While little physical activity data exists for children under two years, our research shows that 19-month-old children spend an average of 184 min in light-intensity physical activity and 47 min in moderate- to vigorous-intensity physical activity each day, with the remainder of their day spent sedentary [19].
The early establishment of obesity-promoting behaviours is important both because these behaviours will determine weight gain trajectories and because they are known to track. For example, there is evidence of tracking of children's dietary [16,20,21] and physical activity [22,23] behaviours from childhood to adolescence and adulthood. Early childhood thus provides a unique and circumscribed opportunity in which we might reduce risk of lifetime adiposity; a time within which to seek to establish lifestyle behaviours that will promote health and minimize the risk of the development of obesity and associated co-morbidities throughout life.
Children's eating, physical-activity and sedentary behaviours are learnt and sustained in the home, and evidence from our own [24] and others' studies suggests that intervening in this environment may improve children's weight and energy-related behaviours [25,26]. Parents shape children's emerging food and physical activity choices through a variety of means including: their knowledge regarding eating, physical activity and sedentary behaviours; parenting style and feeding style; modelling of eating/activity; the food/facilities made available and accessible; food portion sizes and the use of food as rewards [27].
Several existing randomized controlled trials (RCTs) report that parent-focused interventions from birth hold promise for early childhood obesity prevention, with trials reporting modest effects on early childhood weight [26], diet [24][25][26] and sedentary behaviours [24]. In addition, results from a prospective meta-analysis incorporating these RCTs (Early Prevention of Obesity in CHildhood (EPOCH) Trial, n = 2196) provide further support for this focus on interventions in early life [28]. In that analysis, compared to controls, intervention children at age 18 to 24 months had a significantly reduced zBMI, were breastfed for longer, spent less time viewing television, and were significantly less likely to be exposed to a range of obesity-promoting feeding behaviours, specifically controlling feeding style, use of food as a reward, and pressure to eat [29].
However, while these pooled data showed important effects on zBMI, three of the four constituent studies have recently reported that there were no differences in zBMI between intervention and control groups at age five [30][31][32]. The prospective meta-analysis will be undertaken to confirm this loss of intervention effect over time when the last of the four trials [33] collects its 5-year post-intervention data (2016). This potential failure to maintain intervention effect is perhaps not surprising given the complexity and the dynamic nature of the targeted energy-related behaviours across early childhood. The substantial developmental change occurring in the early years frequently heralds a particularly challenging period for parents. For example, parents report increasing rejection of food and of proposed limits to screen time [34] as their child moves through the toddler years. It is likely that parents will require ongoing support to develop strategies that can address the evolving challenges they face.
One of these RCTs, the Melbourne Infant Feeding Activity and Nutrition Trial (InFANT) Program (herein referred to as the Melbourne InFANT Program), informed the protocol for the current study, known as the InFANT Extend Program. The methodology employed for the Melbourne InFANT Program has been previously published [35]. In brief, the Melbourne InFANT Program was a cluster-randomized controlled trial of a community-based, early obesity prevention program, designed to be integrated into existing service delivery systems. The intervention comprised 6 × 2 h sessions delivered quarterly to first-time parents from when infants were approximately three months of age to approximately 18 months of age. Sessions were delivered within existing first-time parent groups, established by community Maternal and Child Health nurses (MCHn) as part of the free universal health care system in Melbourne, Australia. These groups had, in previous years, continued without nurse facilitation for approximately 18 months [36]. The Melbourne InFANT Program commenced where the MCHn involvement with the groups ceased, and parents took over their own management of the groups. The Program sessions utilized anticipatory guidance, providing information and developing skills (in anticipation of their relevance), regarding what and how to feed, active play opportunities, alternatives to screen time and restraint, and parent modelling of healthy eating, physical activity and reduced sedentary behaviours. The group format promoted discussion of strategies, successes and overcoming barriers to key messages. The control group received usual care as well as quarterly newsletters (six in total) on general child health topics not related to obesity-promoting behaviours.
Strengths of the Melbourne InFANT Program included: comprehensive assessment of targeted behaviours using gold standard methods (objective assessment of body mass index (BMI), physical activity and sedentary time, and three non-sequential 24-h dietary recalls); use of existing social groups to potentially facilitate, support and increase intervention dose by non-facilitated contacts between sessions; scalability -as the program was developed to be both low dose and community-based, allowing feasible transfer into existing public health infrastructures; high recruitment and retention rates and incorporation of an economic evaluation [24]. The Melbourne InFANT Program has been adapted for use within eight local government areas in Victoria, Australia and translation of this program from RCT to community use is currently being evaluated.
The InFANT Extend program
The current study builds on the early outcomes of the Melbourne InFANT Program [24,35]. This cluster randomized controlled trial will test the efficacy of an extended (33 versus 15 month) and enhanced (use of web-based materials and Facebook® engagement) version of the original InFANT intervention in a new cohort. Outcomes at 36 months of age will be compared against the control group.
Primary outcomes
In comparison to the control group children, the intervention group children at 36 months of age will exhibit lower body weight and reduced waist circumference.
Secondary outcomes
In comparison to the control group infants, the intervention group infants at 18 and 36 months of age will:
consume more serves of fruits and vegetables, and fewer serves of sugar-sweetened beverages and energy-dense snack foods;
spend more time being physically active and less time in sedentary behaviours, specifically television viewing;
exhibit improved energy-related lifestyle patterns (combining measures of diet, physical activity and screen time).
In comparison to the control group parents, the intervention group parents (when child is 18 and 36 months of age) will demonstrate:
greater knowledge regarding infant eating, physical activity and sedentary behaviours, and more positive attitudes/beliefs regarding their capacity to influence these behaviours;
greater adoption of desired feeding strategies, including parental modelling of healthy eating, the division of responsibility in feeding, and increased availability of promoted (targeted) foods in the home;
greater adoption of strategies, including modelling, for increasing opportunities for physical activity and reducing opportunities for sedentary behaviours.
Overall study design
This study will enable assessment of the effectiveness of a 33-month parent-focused child obesity-prevention intervention (compared with a no-intervention control).
Ethical approval was granted by Deakin University (EC-175-2007, Part 2-2007-175) and the Department of Education and Early Childhood Development (Victoria, Australia) (2011_001000). This trial is registered with the Australian New Zealand Clinical Trials Registry (ANZCTR 12611000386932).
Study participants and recruitment
The recruitment process for the InFANT Extend Program will largely replicate that used in the original Melbourne InFANT Program [24,35]. An important exception relates to the local government area (LGA) recruitment strategy. To seek to address the over-representation of university-educated women in the Melbourne InFANT Program [24], seven relatively disadvantaged Victorian LGAs will be purposively recruited. These LGAs will be selected by the group-level variable Index of Relative Socio-economic Disadvantage (IRSD) [37], such that all will be in the lowest tertile of disadvantage (i.e. most disadvantaged). It is important to note that there will be distinct areas of greater and lesser socioeconomic advantage within each LGA and these are indicated by the IRSD of postcode regions within each LGA. For practical reasons, each LGA will sit within a 75 km radius of the research center (Geelong, Victoria, Australia).
Eighty percent of eligible first-time parents' groups (rounded to next even number to ensure equal within LGA allocation to control and intervention groups) within each of these LGAs will be randomly selected and approached by research staff for recruitment during one of the standard nurse-facilitated group sessions. Individual parents will be eligible to participate if they give informed written consent, are first-time parents and are literate in English. Infants with chronic health problems likely to influence height, weight, levels of physical activity or eating habits will be excluded from analyses but will be permitted to participate in the program.
Parent groups will be eligible if eight or more parents choose to enroll in the study. To facilitate inclusion of participants experiencing disadvantage, groups commencing in MCH centres considered relatively socioeconomically disadvantaged (as determined by the post code of the region within each LGA), will be eligible if six or more parents enroll. When first-time parents' groups decline to participate, the next randomly selected group within the LGA will be approached. Non-consenting parents within participating groups will be permitted to attend the intervention sessions, but will not be required to provide data or be contacted by the research team in any other way.
Randomization of first-time parents' groups (clusters) to intervention or control will occur after recruitment to avoid selection bias [38]. Randomization (stratified by LGA) will be conducted by an independent statistician. While parents will not be blinded to allocation, they will not be informed of the study aims or hypotheses and the recruiting emphasis will focus on promoting healthy eating and active play from the start of life. Staff measuring height and weight will not be blinded to intervention status as they will deliver the intervention, however, data entry and analyses will be conducted with staff blinded to participant's group allocation.
Sample size
Power and sample size
Child weight and waist circumference are the primary outcome measures for this study and considered the most difficult outcomes to change. Secondary aims of this intervention are to increase children's fruit and vegetable consumption and time spent physically active, and to decrease consumption of sweetened drinks and time spent sedentary.
Australian national data report an average weight and waist circumference in 3-year-old children of 16.4kg (SD = 2.2kg) and 51.1cm (SD = 3.8cm) respectively [39]. Further, these data report that at this age Australian children consume an average of 85g (SD = 82g) of vegetables (not including potato), 202g (SD = 129g) of fruit and 78mL (SD = 121mL) of sweetened drinks daily. Additionally, this group is reported to spend an average of 117min (SD = 30min) being active and 662min (SD = 71min) being sedentary daily [40].
To detect a 5% difference in weight between groups at age three (reducing average weight in the intervention group by approximately 1kg), with Type I and Type II errors of 5% and 20% respectively, the number of subjects required is 223 (before adjustment for clustering and attrition; see below). This sample size will also allow us to detect differences of: 4% in BMI (i.e. 0.66kg/m2); 3% in waist circumference (i.e. 1.5cm); 36% in vegetable consumption (i.e. 31g, equivalent to approximately ½ a serve/day); 24% in fruit consumption (i.e. 49g, equivalent to approximately 1/3 of a serve/day); 60% in sweetened drink consumption (i.e. 47mL/day); 10% in active time (i.e. 12min/day) and 4% in sedentary time (i.e. 26min/day).
As this study will randomize by first-time parents' groups, we need to take account of within-group clustering and increase our sample size according to the design effect/inflation factor (DEFF). The design effect is given by: DEFF = 1 + [(n-1) x ICC], where n is the number of people in each cluster and the ICC is the intra-class correlation coefficient. Based on our previous experience working with first-time parents' groups, we estimate that each cluster contains an average of nine mother-infant pairs, and the ICC is estimated to be 0.1 (based on data from the Melbourne InFANT Program), thus the design effect is 1.8. We will also adjust our sample size to account for estimated attrition over the three years of the study (<25%).
Therefore, our final sample size is: (223 x 1.8) / 0.75 = 535. To achieve an equal number of groups in each arm of the trial (mean number of participants in groups = 9), we will aim to recruit 540 participants from a total of 60 first-time parents' groups (30 groups and 270 participants in each arm of the trial).
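As a worked check of the arithmetic above, the sketch below combines a standard two-group comparison-of-means formula with the stated design effect and attrition adjustment; the formula is an assumption (it gives approximately 226 rather than the quoted 223, depending on the variant and rounding used), so it illustrates the approach rather than reproducing the authors' exact calculation.

```python
# Illustrative reproduction of the sample-size arithmetic (assumed formula).
from math import ceil
from scipy.stats import norm

def two_group_total_n(delta, sd, alpha=0.05, power=0.80):
    """Total N for a two-group comparison of means (normal approximation)."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    per_group = 2 * (z * sd / delta) ** 2
    return 2 * ceil(per_group)

n_detect = two_group_total_n(delta=0.05 * 16.4, sd=2.2)  # ~226; the protocol uses 223
deff = 1 + (9 - 1) * 0.1                                 # design effect: cluster size 9, ICC 0.1
n_final = 223 * deff / 0.75                              # inflate for clustering and <25% attrition
print(n_detect, deff, round(n_final))                    # 226 1.8 535 -> recruit 540 (60 groups of ~9)
```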
The CONSORT flow diagram is outlined in Fig. 1.
Intervention group
The intervention arm will receive the previously trialed Melbourne InFANT Program delivered in first-time parent groups until the child is aged 18 months.
The intervention content has been reported elsewhere [24,35] and is outlined in the background in this paper. In the current InFANT Extend Program, the Melbourne InFANT Program content will be largely replicated; however, some enhancements, informed by both quantitative [24] and qualitative [41] outcomes, will be incorporated (e.g. parents' desire for more practical strategies for preparing food and playing with children). To reflect the transition in technology preference of parents over the time since the Melbourne InFANT Program was trialed (2008-2010) we plan to alter some aspects of program delivery. While the original Program content was delivered in groups with the aid of a DVD (which parents took home), in the present study DVDs will no longer be used and all content will be made available online via dedicated webpages. The use of webpages provides opportunities, not previously afforded, to monitor website use and thus to estimate, at a group level, the dose of intervention received. In addition, during the first 18 months, the original Melbourne InFANT Program will be extended by the addition of first-time parent group Facebook® (Menlo Park, CA, USA) Pages. On-line engagement will be restricted to the individual first-time parent group, and will be mediated by the group facilitator, a nutrition expert, for up to one hour per week. This use of social media is anticipated to promote parent engagement between their regular social meetings, and to provide opportunities to: share their child feeding and activity-related outcomes (knowledge, questions, successes, challenges); enable the group facilitator to provide support; and to reiterate program messages.
As in the original Melbourne InFANT Program, parents will be mailed one newsletter each quarter. These newsletters were well received in the original program and provided an important opportunity, given the infrequency of intervention sessions (three monthly), to engage mothers and to reinforce key program messages. This aspect of the program will be modified and extended in the current study. Group delivery beyond 18 months is not feasible, given that first-time parent groups (the site of delivery) are often not sustained after this time [36]. Furthermore, our previous experience highlighted that while attendance to 18 months was relatively high, attendance figures began to decline at around 12 months of age. Therefore, in the current study, the Melbourne InFANT Program content will be reiterated and extended through delivery of intervention messages via six emailed newsletters (3 monthly from child age 18 months to 3 years). Newsletters will contain web links directing participants to specifically designed content within the Melbourne InFANT Program website (discussed below). This content will seek to reinforce and build upon skills and knowledge developed in the group-delivered intervention. It will also introduce new knowledge and skills known to be of relevance to parents in the promotion of healthy eating and physical activity behaviors across the often behaviorally challenging toddler years. In addition, participants will be reminded of newly developed toddler-focused content on the Program website through monthly emails and Facebook® posts. The focus of each intervention session (3-18 months) and each newsletter provided in the extension (21-36 months) is outlined in Table 1.
Control group
The control group families will receive usual care from their MCHn. In addition, these families will be sent general health newsletters (e.g. dental health, sun protective behaviours, general safety) every three months across the child's first three years (11 newsletters in all). Consistent with the intervention group, control group participants will receive birthday and Christmas cards. These families' participation will be acknowledged with gifts (to a maximum value of $15.00) on receipt of completed questionnaires.
Data collection
Measures
As outlined in Table 2, parent and infant data will be collected at 3, 18 and 36 months. Standard demographic and socio-economic information will be collected by parental report at baseline (3 months). Additional measures to be collected are detailed below.
Primary measures
Child's anthropometry
Height, weight and waist circumference will be measured by study staff who will undergo training with a paediatrician specialized in clinical nutrition. Recumbent length (in infants) will be assessed using a calibrated length mat, and height (from standing age) will be assessed using a calibrated stadiometer. Waist circumference (minimum circumference between the rib cage and iliac crest) will be measured using a non-stretchable tape measure at 18 and 36 months of age. BMI z score will be calculated using WHO growth standards [42].
Table 1 Focus of each intervention session (3-18 months) and extension newsletter (21-36 months), by child age and developmental context
First session: to support parents to delay weaning/introduction of solids to around 6 months; to provide basic principles related to best practice in early feeding; to introduce basic concepts regarding parental feeding styles and how these might relate to beliefs about parenting.
6 mo (adoption by parents of a feeding style and TV viewing habits; food rejection by infants): to develop parents' understanding regarding feeding styles and their impact on children's eating, basic nutrition principles, and sedentary behaviours in families; to introduce national recommendations for no screen exposure (television viewing) until 2 years of age and the reasons for this; to develop parents' understanding of 'normal' food rejection and how to interpret and manage it.
9 mo (increasing use of TV; parents' increased awareness of child mobility: infant crawls, pulls self upright and walks with handhold): to develop understanding regarding parental modelling of eating, sedentary and physical activity behaviours; the impact of eating, activity and sedentary behaviours on the health of children and adults; and the provision of opportunities to promote healthy eating and engagement in play.
12, 15 and 18 mo (increasing autonomy of the child in eating and activity; infant stands without support and begins to walk): continued development of themes/skills regarding eating and moving for health (parents and children); how to feed and how to manage food rejection and demands; providing fail-safe food and activity environments.
21, 24, 27, 30, 33 and 36 mo (child independence in activity and feeding; desire to be in control and to choose): continued development of themes/skills regarding eating and moving for health (parents and children); how to feed and how to manage food rejection and demands; practical strategies for incorporating more active play into family routines; development of fundamental movement skills through everyday play; providing supportive food and activity environments.
Key: mo, months
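For reference, the WHO growth standards cited for the BMI z-score above use the LMS method; the sketch below assumes the L, M and S values are looked up from the published WHO tables for the child's sex and age, and the numbers in the example call are placeholders rather than official reference values.

```python
from math import log

def lms_zscore(measure, L, M, S):
    """LMS transformation: z = ((X/M)**L - 1) / (L*S), or ln(X/M)/S when L == 0."""
    if L == 0:
        return log(measure / M) / S
    return ((measure / M) ** L - 1) / (L * S)

# Example (placeholder LMS values, not WHO reference data): BMI 16.8 kg/m2 at 36 months
print(round(lms_zscore(16.8, L=-0.5, M=15.7, S=0.08), 2))
```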
Secondary measures
Child's dietary intake
Children's dietary intake will be assessed at 18 months and 3 years with a parent-completed food frequency questionnaire. This 79-item FFQ has been used within the original InFANT Program and is currently being validated against the three days of 24 h recall also collected in that study when children were 18 and 36 months of age. Data will be analyzed using an in-house, specially designed database based on the 2007 Australian Food and Nutrient Database (AUSNUT) [43].
Measurement of physical activity and sedentary behaviours
Seven days of objectively assessed physical activity data will be collected using accelerometers at 18 and 36 months. At this time children will be fitted with an ActiGraph accelerometer which they will wear for eight consecutive days (which will capture weekday and weekend day activity and sedentary patterns) [44]. ActiGraph monitors are small, light and unobtrusive and are worn on a belt around the waist. This methodology was successfully employed during the original InFANT Program intervention when children were aged approximately 19 months. ActiGraph counts correlate (up to r = 0.70) with energy expenditure estimated by direct observation and doubly-labelled water respectively, among 3-5 year old children [45,46] and correlate highly (r = 0.87) with energy expenditure estimated by indirect calorimetry among children [47]. Counts will be recorded at 15-s epochs to accurately capture the sporadic and intermittent activity patterns of young children. Data will be downloaded and then reduced to total counts/day, minutes/day and percentage of time spent sedentary, and in light-, moderate-and vigorous-intensity physical activity using age-appropriate cut points [48]. In addition, indirect measures of children's physical activity will be assessed by parental report including: parental engagement in physically active play and the number of hours the child typically spends playing outdoors on weekdays and weekend days. In addition, parents will be asked to indicate how much time (hours/min) their child usually spends watching television/DVD and playing electronic games on a typical weekday (Monday-Friday) and on a typical weekend day (Saturday and Sunday) and to estimate the amount of time spent in situations that restrict movement (e.g. stroller, playpen) at 3, 9, 18 and 36 months. Test-retest reliability of these items in a previous study ranged from ICC = 0.5-0.9 [49]. Parental reports of their child's "usual" TV viewing has been shown to correlate with both videotaped observations of the child's TV viewing [50] and with parental diaries of viewing [51].
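The epoch-level reduction described above (15-s counts classified into sedentary, light and moderate-to-vigorous intensity and summed to minutes per day) can be sketched as follows; the numeric cut points in the sketch are placeholder assumptions, not the age-appropriate thresholds cited in the protocol [48].

```python
import random

SEDENTARY_MAX = 25   # counts per 15-s epoch (hypothetical cut point)
LIGHT_MAX = 400      # counts per 15-s epoch (hypothetical cut point)

def classify_epoch(counts):
    """Assign one 15-s epoch to an intensity category."""
    if counts <= SEDENTARY_MAX:
        return "sedentary"
    if counts <= LIGHT_MAX:
        return "light"
    return "mvpa"

def minutes_per_day(epoch_counts, epoch_seconds=15):
    """Sum classified epochs into minutes spent in each intensity category."""
    totals = {"sedentary": 0, "light": 0, "mvpa": 0}
    for c in epoch_counts:
        totals[classify_epoch(c)] += 1
    return {k: v * epoch_seconds / 60 for k, v in totals.items()}

# Example: one hour of simulated wear time (240 fifteen-second epochs)
random.seed(0)
print(minutes_per_day([random.randint(0, 800) for _ in range(240)]))
```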
Parent's diet
Parents' dietary intake will be assessed using a validated Food Frequency Questionnaire, The Cancer Council's Dietary Questionnaire for Epidemiological Studies (Version 3.1) when child is 3, 18 and 36 months of age [52].
Parent's physical activity and television viewing
Parents will report their physical activity behaviours using the validated Active Australia Survey [53] at 3, 18, 36 months. Parents will also report the total time they spend watching television during their leisure-time in a typical week [54].
Home food environment
A range of home food environment variables will be assessed at 3, 18 and 36 months. Aspects of nutrition knowledge focused around nutrition targets of the intervention will be assessed using modified subscales [57]. Covert restriction will be assessed using a validated subscale [58]. Opportunities for modelling of healthy eating (e.g. sharing family meals) and home food availability will be measured using previously established tools [59].
Parental confidence regarding promoting healthy eating (and reducing sedentary time/promoting physical activity) will be assessed using established tools [60].
Home physical activity and sedentary environment
Parents will be asked general questions relating to their knowledge about physical activity in early childhood and their interactions with their child around physical activity, and will complete an audit checklist of the home physical activity and sedentary environment at 3, 18 and 36 months.
Economic evaluation
Health service use
Exposure to the InFANT Extend Program may have implications for families' use of the broader health system if the information provided through the program reduces parent's help-seeking behaviour elsewhere. It is therefore important to monitor use of relevant health services, especially MCHn visits as the primary health provider in this population. Parents will be asked to report the use of services related to their infant's or their own weight, diet/eating behaviours or physical activity in order to capture any differential use of health and other services associated with the intervention. Parents will be asked to report specifically on their use of MCHns, and more generally on a broad range of services. In each case, parents will be asked to report the number of occasions of service use and, where applicable, any financial cost. The investment of resources involved in reported use of health services will be costed using established unit costs for wages, services and material costs in Australian dollars.
Statistical analyses
Intervention effects will be assessed based on intention-to-treat principles and taking into account the cluster-based sampling design. Generalized Estimating Equations [61], implemented with the xtgee function in Stata, will be used to fit longitudinal regression models enabling comparison of primary outcome variables between intervention and control groups, adjusted for baseline values where appropriate (infants were not consuming foods nor mobile at baseline, i.e. 3 months of age, hence adjustment for diet or physical activity variables is not possible).
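The protocol specifies Stata's xtgee; purely as a hedged illustration of the same idea, the Python/statsmodels sketch below fits a GEE with an exchangeable working correlation on synthetic data. The variable names (zbmi, group, cluster_id) and the simulated data are assumptions for demonstration only, not the authors' code or data.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Synthetic illustration: 60 clusters (first-time parent groups) of ~9 children each.
rng = np.random.default_rng(0)
n_clusters, cluster_size = 60, 9
df = pd.DataFrame({
    "cluster_id": np.repeat(np.arange(n_clusters), cluster_size),
    "group": np.repeat(rng.integers(0, 2, n_clusters), cluster_size),  # arm allocated per cluster
})
cluster_effect = np.repeat(rng.normal(0, 0.3, n_clusters), cluster_size)
df["zbmi"] = 0.2 - 0.1 * df["group"] + cluster_effect + rng.normal(0, 1, len(df))

model = smf.gee(
    "zbmi ~ group",                           # outcome regressed on intervention arm
    groups="cluster_id",                      # first-time parent group defines the cluster
    data=df,
    family=sm.families.Gaussian(),
    cov_struct=sm.cov_struct.Exchangeable(),  # within-cluster correlation structure
)
print(model.fit().summary())
```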
Discussion
The prevalence of overweight and obesity in early childhood remains high and is determined in part, by eating, physical activity and sedentary behaviours. These behaviours are predominantly learnt and supported in the home during the first few years of life, and are likely to influence health throughout life. Given this, the early years hold promise as a time when obesity prevention may be most effective.
While there is a growing body of evidence to support the proposition that family-focused interventions can improve children's energy-related behaviours and weight, there remain many unanswered questions regarding the dose (timing, intensity and duration) of intervention delivery, key issues for translation of interventions into real-world settings. The issue of scalability is of fundamental importance. The current study builds upon the Melbourne InFANT Program, which is currently being trialed in community settings across Victoria, Australia. This opportunity for translation speaks to the program's potential scalability. The translation (uptake, modification, facilitator and end user satisfaction) of the Melbourne InFANT Program is currently being evaluated. The InFANT Extend Program's focus on the toddler years (18 months to three years) builds on the Melbourne InFANT Program's early support for energy balance behaviours (3-18 months of age) by reiterating key messages and extending this support in a timely way through the introduction of new knowledge, ideas and skills across the toddler years.
In summary, this cluster-randomized controlled trial assessing the efficacy of a low dose, scalable, web-based addition to the existing Melbourne InFANT Program will provide important information regarding capacity and opportunities to maximize early childhood intervention effectiveness over the first three years of life. This study continues to build the evidence base regarding the design of cost-effective, scalable interventions to promote protective energy balance behaviors in early childhood, and in turn, promote improved child weight and health across the life course.
|
v3-fos-license
|
2020-02-26T14:04:36.094Z
|
2020-02-01T00:00:00.000
|
211477491
|
{
"extfieldsofstudy": [
"Chemistry",
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://doi.org/10.3390/antiox9020172",
"pdf_hash": "63273c60e7ffc1e379d8ba784e99519fa06a3525",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46493",
"s2fieldsofstudy": [
"Biology"
],
"sha1": "0d5ee33d7bf6210b406f2aea1305908a0b4d8bed",
"year": 2020
}
|
pes2o/s2orc
|
The Effect of Nano-Epigallocatechin-Gallate on Oxidative Stress and Matrix Metalloproteinases in Experimental Diabetes Mellitus
Background: The antioxidant properties of epigallocatechin-gallate (EGCG), a green tea compound, have been already studied in various diseases. Improving the bioavailability of EGCG by nanoformulation may contribute to a more effective treatment of diabetes mellitus (DM) metabolic consequences and vascular complications. The aim of this study was to test the comparative effect of liposomal EGCG with EGCG solution in experimental DM induced by streptozotocin (STZ) in rats. Method: 28 Wistar-Bratislava rats were randomly divided into four groups (7 animals/group): group 1—control group, with intraperitoneal (i.p.) administration of 1 mL saline solution (C); group 2—STZ administration by i.p. route (60 mg/100 g body weight, bw) (STZ); group 3—STZ administration as before + i.p. administration of EGCG solution (EGCG), 2.5 mg/100 g b.w. as pretreatment; group 4—STZ administration as before + i.p. administration of liposomal EGCG, 2.5 mg/100 g b.w. (L-EGCG). The comparative effects of EGCG and L-EGCG were studied on: (i) oxidative stress parameters such as malondialdehyde (MDA), indirect nitric oxide (NOx) synthesis, and total oxidative status (TOS); (ii) antioxidant status assessed by total antioxidant capacity of plasma (TAC), thiols, and catalase; (iii) matrix-metalloproteinase-2 (MMP-2) and -9 (MMP-9). Results: L-EGCG has a better efficiency regarding the improvement of oxidative stress parameters (highly statistically significant with p-values < 0.001 for MDA, NOx, and TOS) and for antioxidant capacity of plasma (highly significant p < 0.001 for thiols and significant for catalase and TAC with p < 0.05). MMP-2 and -9 were also significantly reduced in the L-EGCG-treated group compared with the EGCG group (p < 0.001). Conclusions: the liposomal nanoformulation of EGCG may serve as an adjuvant therapy in DM due to its unique modulatory effect on oxidative stress/antioxidant biomarkers and MMP-2 and -9.
in nanoparticles [35]. Catechin nanoemulsions proved to be stable for long periods of time (120 days at 4 °C) [36]. Liposomes, assembled from phospholipid bilayers similar to cell membranes, are one of the nanoparticles frequently used for drug delivery [23]. Their biphasic character makes them suitable for being carriers for both hydrophilic (in the central aqueous compartment) and hydrophobic (in lipid bilayers) compounds [37,38]. Nanoformulation by encapsulation in liposomes could also facilitate the solubility for hydrophobic particles [4]. Through all of these properties, liposomes can offer an enhanced bioavailability, stability, and shelf life for sensitive ingredients [39].
The aim of this study was to investigate the effect of two forms of EGCG (EGCG solution and liposomal EGCG) on oxidative stress parameters, antioxidant capacity, serum MMP-2 and -9, and pancreatic and liver function in STZ-induced diabetes mellitus in rats.
Experimental Model
The study was approved by the Ethics Committee of the University and by the National Sanitary Veterinary Authority number 137/13.11.2018. Twenty-eight male Wistar-Bratislava rats were procured from the Centre of Experimental Medicine, University of Medicine and Pharmacy, Cluj-Napoca, Romania. The rats weighed 200-250 g, were kept in polypropylene cages, with day-night regimen, at constant temperature (24 ± 2 °C) and humidity (60 ± 5%). Free access to food (standardized pellets from Cantacuzino Institute, Bucharest, Romania) and water was provided to all animals. The animals were randomly divided into 4 groups (7 rats/group). The groups were organized as follows: group 1-control group (C)-with intraperitoneal (i.p.) administration of 1 mL saline solution, group 2-STZ administration by i.p. route (STZ), group 3-STZ administration as before + i.p. administration of EGCG solution (EGCG), group 4-STZ administration as before + i.p. administration of liposomal EGCG (L-EGCG). Each medication was dissolved in saline solution (0.9% sodium chloride) and the volume administered i.p. was 1 mL [19]. The following doses were used: STZ-60 mg/100 g body weight (b.w.) [40]; EGCG in saline solution or in liposomal form was freshly prepared and administered i.p. in a dose of 2.5 mg/100 g b.w./day as pretreatment, two consecutive days before STZ administration [41]. Intraperitoneal administration was preferred as a method that improves EGCG bioavailability, compared to the low bioavailability with oral administration [42].
Blood samples were taken at 48 h after STZ administration, under ketamine anesthesia (5 mg/100 g bw, i.p. route), from the retro-orbital sinus, followed by rat euthanasia by cervical dislocation [43]. Rats with glucose higher than or equal to 200 mg/dL were considered to have diabetes mellitus [20].
Preparation and Physicochemical Characterization of EGCG-Loaded Liposomes
For the preparation of liposomes, we used a modified film hydration method [44,45]. The lipid double-layer components, having a 70 mM concentration (DPPC:MPEG-2000-DSPE:CHO = 4.75:0.25:1 molar ratio), were dissolved in ethanol in a round-bottomed glass flask. Ethanol was evaporated at 45 °C under low pressure; the lipid film product was hydrated with a solution of EGCG diluted in highly purified water, pH = 5.00, at the same temperature. The resulting liposomal dispersion was then extruded through polycarbonate membranes with 200 nm final pore dimension, with LiposoFastLF-50 equipment (Avestin Europe GmbH, Mannheim, Germany). Unencapsulated EGCG particles were removed by the dialysis method, using Slide-A-Lyzer filters (cassettes) with a 10 kDa molecular weight cut-off.
To assess the amount of liposomal-loaded EGCG, we used a spectrophotometric method-the reaction with Folin-Ciocâlteu reagent (Merck, Darmstadt, Germany) [46]. During this procedure, a dilution of liposomal dispersion with methanol 1:10 (v/v) was made, and a UV-VIS spectrophotometer (Specord 200 Plus, Analytik Jena, Überlingen, Germany) measured the absorbance value.
The size and polydispersity index of liposomes were assessed by the dynamic light scattering method (with a 90° scattering angle), and the zeta potential was measured by laser Doppler electrophoresis; a Zetasizer Nano ZS analyzer was used for both assessments (Malvern Instruments Co., Malvern, UK).
The mean liposomal concentration of the L-EGCG solution was about 900 µg/mL, and the encapsulation efficiency was over 80%. The liposomal vesicles' mean size was 170 nm, and the polydispersity index was less than 0.2, meaning that the vesicles' size and uniformity were appropriate to ensure a prolonged circulation in the blood. Aggregative stability was indicated by a zeta potential of 51.83 mV.
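As a worked illustration of how the loaded-EGCG figures above are obtained, the sketch below back-calculates concentration from absorbance via a linear Folin-Ciocâlteu calibration curve and derives the encapsulation efficiency; the slope, intercept and total-EGCG values are hypothetical placeholders chosen only to be consistent with the reported ~900 µg/mL and >80% figures.

```python
def egcg_concentration(absorbance, slope=0.0012, intercept=0.005, dilution=10):
    """EGCG concentration (µg/mL), back-calculated from a 1:10 methanol dilution
    using an assumed linear calibration curve (placeholder slope/intercept)."""
    return (absorbance - intercept) / slope * dilution

def encapsulation_efficiency(loaded_ug_per_ml, total_ug_per_ml):
    """Encapsulated fraction of the total EGCG used in the preparation (%)."""
    return 100.0 * loaded_ug_per_ml / total_ug_per_ml

loaded = egcg_concentration(absorbance=0.113)  # ~900 µg/mL with these placeholder values
print(round(loaded), round(encapsulation_efficiency(loaded, total_ug_per_ml=1100), 1))
```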
Oxidative Stress and Antioxidant Parameters Assessment
Parameters associated with oxidative stress and antioxidant status were determined from collected blood samples. The parameters used to assess oxidative stress were: malondialdehyde (MDA) [47], indirect nitric oxide (NOx) synthesis assessment [48], and total oxidative status (TOS) [49]. Antioxidant status parameters were represented by total antioxidant capacity of plasma (TAC) [50], thiols [51], and catalase [52]. All measurements were performed using a Jasco V-350 UV-VIS spectrophotometer (Jasco International Co, Ltd., Tokyo, Japan). Matrix metalloproteinases (MMPs) were appraised from serum using a rat ELISA kit (Boster Biological technology, Pleasanton, CA, USA) and a Stat Fax 303 ELISA reader (Quantikine, McKinley Place NE, MN, USA).
Assessment of Beta Pancreatic Cells and Hepatic Cells Function
Glycemia was measured at 48 h after DM induction, as it was previously observed that STZ induces significant beta cell death at 48 h after administration [53]. Glycemia was also used as a parameter for pancreatic function changes induced by experimental diabetes mellitus. Hepatic cytolysis was assessed by serum levels of aspartate aminotransferase (AST) and alanine aminotransferase (ALT) measured by a standardized technique (Vita Lab Flexor E, Spankeren, The Netherlands) [40].
Data Analysis
The SPSS software package version 21.0 (SPSS Inc., Chicago, IL, USA) was used for statistical analysis and graphic representations. The acceptable error threshold was p = 0.05. In order to describe the continuous quantitative data, we used the arithmetic mean and the standard deviation (SD). The distribution of investigated markers in groups was plotted as individual values (circles) and median (line), as recommended by Weissgerber and coauthors [54]. The Kruskal-Wallis ANOVA was used to test the differences in the investigated markers. The Mann-Whitney test was used in post hoc analysis when significant differences were identified by the Kruskal-Wallis ANOVA test.
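The authors performed these tests in SPSS; purely as a hedged illustration of the same workflow (an omnibus Kruskal-Wallis test across the four groups followed by pairwise Mann-Whitney post hoc tests when the omnibus test is significant), a Python sketch with placeholder data is shown below.

```python
from itertools import combinations
from scipy.stats import kruskal, mannwhitneyu

# Placeholder values for one marker in the four experimental groups (7 rats each);
# these are illustrative numbers, not study data.
groups = {
    "C":      [1.1, 1.3, 1.2, 1.4, 1.2, 1.3, 1.1],
    "STZ":    [3.5, 3.8, 3.6, 3.9, 3.7, 3.4, 3.8],
    "EGCG":   [2.9, 3.1, 3.0, 2.8, 3.2, 3.0, 2.9],
    "L-EGCG": [2.1, 2.3, 2.2, 2.0, 2.4, 2.2, 2.1],
}

h, p = kruskal(*groups.values())
print(f"Kruskal-Wallis: H={h:.2f}, p={p:.4f}")
if p < 0.05:
    # Pairwise post hoc comparisons, mirroring the Mann-Whitney step described above.
    for a, b in combinations(groups, 2):
        u, p_pair = mannwhitneyu(groups[a], groups[b], alternative="two-sided")
        print(f"{a} vs {b}: U={u:.1f}, p={p_pair:.4f}")
```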
Results
No rat died during the experiment, so the analysis was conducted on all seven rats in each group. All P values for comparison between groups are presented in Supplementary Table S1.
In our experimental model, diabetes mellitus was successfully induced by STZ: all rats that received STZ were definitely diabetic, proven by glycemia >200 mg/dL and values significantly higher in diabetic rats compared to control group: 401.81(11.31) mg/dL versus 84.27 (2.87) mg/dL, respectively (expressed as mean and standard deviation), with a p-value < 0.001. Also, hepatic damage was detected in the STZ group, quantified by significant elevation of transaminases AST and ALT (Table 1). Oxidative stress parameters (MDA, NOx, and TOS) significantly increased after induction of DM (p-values <0.001 in all items, Figure 1a-c, Table 1). MMP-2 and MMP-9 levels were significantly higher in the STZ-induced DM group compared with control group (p-values <0.001, Figure 4a,b, Table 1). Serum antioxidant capacity, measured by thiol, catalase, and TAC levels, was significantly reduced in diabetic rats compared to control animals (p-values < 0.001 in all items, Figure 2a-c, Table 1).
In the diabetic group pretreated with EGCG, oxidative stress parameters NOx and TOS were significantly reduced compared to the untreated STZ group (with p-values of 0.017 and <0.001, respectively, Figure 1b,c). All antioxidant parameters (thiols, catalase, and TAC) were significantly higher in the STZ-treated group (p-values of < 0.001, 0.026, and 0.017 respectively, Figure 2a-c).
No significant differences were noted in MDA and MMP values between the pretreated group with EGCG compared to the untreated STZ group (Figure 1a, Figure 4a,b). Also, glycemia and liver parameters were not significantly different in the EGCG pretreated group, with the exception of a decrease in ALT (p-value = 0.038, Figure 3c).
In the STZ group pretreated with L-EGCG, all oxidative stress parameters were significantly decreased and serum antioxidant capacity parameters were all increased, with better results compared to the STZ group pretreated with EGCG (p < 0.017, Figures 1 and 2). Also, the L-EGCG solution improved glycemic values and decreased transaminases levels better than EGCG (p < 0.001, Figure 3). The MMP levels were significantly lower in the L-EGCG-treated group compared to the diabetic untreated group or compared to the STZ group pretreated with EGCG (<0.001, Figure 4). as pretreatment. The symbol-number codes correspond to the p-values < 0.05 as follows: α-STZ compared to control; β-STZ + EGCG compared to control; ε-STZ + EGCG compared to STZ; γ-STZ + L-EGCG compared to control; λ-STZ + L-EGCG compared to STZ; µ-STZ + L-EGCG compared to STZ + EGCG.
In the STZ group pretreated with L-EGCG, all oxidative stress parameters were significantly decreased and serum antioxidant capacity parameters were all increased, with better results compared to the STZ group pretreated with EGCG (p < 0.017, Figures 1 and 2). Also, the L-EGCG solution improved glycemic values and decreased transaminases levels better than EGCG (p < 0.001, Figure 3). The MMP levels were significantly lower in the L-EGCG-treated group compared to the diabetic untreated group or compared to the STZ group pretreated with EGCG (<0.001, Figure 4). The Kruskal-Wallis ANOVA test identified significant differences between the groups with diabetes and EGCG pretreatment for all evaluated parameters (p-values < 0.0001). The post hoc analysis identified significant differences in most of the cases with better protection for the EGCG-treated group, and significantly higher protection when liposomal EGCG solution was used (Figures 1-4).
Protective Effects of EGCG on Pancreatic and Hepatic Cell Function in Diabetic Rats
In our study, EGCG reduced blood glucose levels in pretreated animals but the reduction was not statistically significant (Table 1, Figure 3). Some of the antidiabetic effects of EGCG are suggested to be the suppression of appetite, adjustment of dietary fat emulsification in the gastrointestinal tract, inhibition of gastrointestinal lipolysis, and reduction of nutrient absorption enzymes [55]. The greatest reduction in blood glucose was obtained in the liposomal EGCG-pretreated group. This indicates a protective effect of EGCG on pancreatic cell function. Meng et al. showed that EGCG can inhibit inflammation by reducing reactive oxygen species and downregulating the production of inducible nitric oxide synthase (iNOS) [56]. Furthermore, EGCG increases glucose tolerance [57] and decreases HbA1c levels in STZ-induced diabetes in rats, contributing to further prevention of diabetic complications [58]. Another suggested mechanism of EGCG's protective effect is increased glucose uptake due to promotion of glucose transporter-4 (GLUT4) translocation in skeletal muscle, through activation of both the phosphoinositide 3-kinase and AMP-activated protein kinase pathways [58]. EGCG also increases tyrosine phosphorylation of insulin receptors, having an insulin-like effect on H4IIE hepatoma cell lines [59].
The liver is severely affected in type 1 diabetes mellitus. In our study, we found elevated AST and ALT levels, showing liver damage, in STZ diabetic rats (Table 1, Figure 3). In STZ-induced diabetes, transaminase elevation is the consequence of the toxic effect of STZ on hepatocytes, which induces lipid peroxidation, oxidative stress enhancement, peroxisome proliferation, and mitochondrial dysfunction [60][61][62]. Rodriguez et al. identified increased NO levels and hepatic oxidative stress in STZ-induced diabetic rats [63]. In our study, pretreatment with EGCG decreased ALT levels, preventing hepatic damage induced by STZ. Furthermore, liposomal EGCG administration significantly reduced AST and ALT values, confirming the enhanced protective effect of L-EGCG on hepatic cells. Other studies have also demonstrated the hepatoprotective effect of green tea extracts in hepatic injury, reflected by decreased serum transaminase levels and improved structural changes on histopathological examination [64]. Moreover, long-term consumption of EGCG (in healthy Wistar rats) decreases age-induced hepatic damage by lowering ALT and AST serum levels and improving microscopic changes of the liver tissue due to the aging process [65].
Effect of EGCG on Oxidative Stress Parameters and Plasmatic Antioxidant Capacity
In this study, increased levels of MDA, NO, and TOS were observed in diabetic rats (Table 1 and Figure 1), together with low levels of antioxidant biomarkers such as thiols, catalase, and TAC (Table 1 and Figure 2). Pretreatment with EGCG and L-EGCG induced protection against STZ toxic effects, as demonstrated by reduction of oxidative stress parameters (Table 1, Figure 1) and by enhancement of antioxidant defense (Table 1, Figure 2), with best results for the liposomal form. STZ-induced diabetes in experimental models is followed by an enhanced production of reactive oxygen species (ROS) and consumption of cell antioxidant systems, as a consequence of necrotic and apoptotic degeneration of pancreatic β cells [66,67]. Hyperglycemia itself is another factor generating intracellular ROS [68]. Oxidative stress (by excessive ROS production, auto-oxidation of glycated proteins, and increased lipid peroxidation) and decreased antioxidant capacity (free radical scavengers and enzymatic systems) are also involved in the pathogenesis of diabetic complications [69][70][71][72].
Green tea component EGCG is a flavonoid with antioxidant and anti-inflammatory properties conferred by its particular structure, a flavanol core and two gallocatechol rings, which are able to bind metal ions and scavenge free oxygen radicals. As a consequence, EGCG exerts direct antioxidant effects (scavenger of ROS and chelator of metal ions), but also indirect antioxidant effects (inducer of antioxidant enzymes, such as catalase, and inhibitor of oxidases, such as NADPH (nicotinamide adenine dinucleotide phosphate) oxidase, lipoxygenase, or xanthine oxidase) [73]. Anti-inflammatory effects of EGCG were also related to the increase of circulating levels of interleukin-10 (an anti-inflammatory cytokine) in nonobese diabetic mice [14]. EGCG can decrease lipid peroxidation in the liver, kidney, and brain, and reduce lymphocyte DNA damage in diabetic mice [74].
EGCG has low bioavailability which can be modified by incorporation in special drug delivery systems. Because of its highly lipophilic nature, EGCG is suitable for incorporation in liposome nanoparticles, composed of phospholipid bilayers. Minnelli et al. showed that pretreatment of adult retinal pigmented epithelium (ARPE) cells with EGCG encapsulated in magnesium liposomes increases the survival of cells exposed to hydrogen peroxide (H 2 O 2 ), with better preserved mitochondria structure on electron microscopy examination, showing the superior antioxidant activity of L-EGCG compared with free EGCG [75]. In this regard, natural antioxidant products could be a promising therapeutic option for prevention of diabetes mellitus and its complications, conferring protection against oxidative damage by liposomal nanostructure encapsulation [69].
EGCG Effect on Matrix Metalloproteinases
In the present study, serum levels of MMP-2 and -9 increased after DM induction and were better modulated by L-EGCG (Table 1 and Figure 4). In experimental models of DM, increased MMP-2 expression and activity were linked to elevated ROS levels and oxidative stress, with consequent pancreatic beta cell apoptosis, showing MMP-2's important role in DM pathogenesis [76]. Thus, inhibition of intracellular MMP-2 expression is an essential target for beta cell protection and DM prevention. There is also a postulated connection between MMP production and the inflammatory process and proinflammatory cytokine production associated with DM. Inflammatory mediators such as MCP-1 and NF-kB can induce MMP overproduction in DM [77]. After their secretion as inactive forms, MMPs are converted to active forms by different proteases implicated in their cleavage, a process to which proinflammatory molecules contribute [38]. MMPs are also involved in the regulation and duration of the immune response, endothelial cell function, vascular smooth muscle migration and proliferation, Ca2+ signaling pathways, and vessel contraction, all of which substantially influence vascular remodeling in DM [78,79].
Activated inflammatory cells such as leucocytes can contribute to endothelial cell dysfunction and vascular damage through direct and indirect pathways. The indirect pathways include augmentation of MMP production by proinflammatory cytokines synthesized in activated leucocytes [70].
Activation of MMP-2 and MMP-9 is important in the pathogenesis of diabetic microangiopathic complications such as diabetic retinopathy, nephropathy, and neuropathy [39]. In diabetic retinopathy, MMP activation induces apoptosis of retinal endothelial cells and degrades junction proteins, leading to increased vascular permeability [80,81]. In experimental models of DM, increased oxidative stress activates MMP-2, and antioxidant therapies inhibit the development of diabetic retinopathy by modulating retinal MMP-2 levels [32,82]. Diabetic nephropathy, one of the most severe microangiopathies in diabetes mellitus, is also characterized by MMP overexpression and accelerated ECM degradation, both being hallmarks of the associated histopathologic changes [30]. Increased MMP synthesis can also lead to neuronal injury through blood-nerve barrier (BNB) disruption, contributing to the neuropathic pain associated with diabetic neuropathy [83,84].
The multiple and complex roles exhibited by MMPs are explained by their multiple localizations. MMP-2 and MMP-9 are colocalized in vessel walls and atherosclerotic plaque, being involved in endothelial dysfunction, DM macrovascular complications, and vascular remodeling [85,86]. Wang et al. reported a protective effect of EGCG after i.p. administration, by reducing the plasma levels of TNF-α, IL-6, and monocyte chemoattractant protein-1 (MCP-1) [38]. There is also evidence that EGCG can inhibit MMP-2 activation [87]. Multiple compounds of green tea can inhibit MMP-2 and -9, but the most efficient proved to be EGCG and epigallocatechin (EGC) [88]. Therefore, we chose the EGCG compound for our experimental study. Moreover, liposomal encapsulation brings increased bioavailability, with better results in reducing oxidative stress biomarkers and MMP plasma levels. EGCG reduces MMP-2 activity by targeting the fibronectin type II repeats 1 and 3 of MMP-2, binding the amino acids that constitute the exosite of this enzyme and hindering proper positioning of the substrate [89]. Due to its antioxidant effects and inhibitory action on protein tyrosine kinases, EGCG reduces MMP-9 activity by reducing its release from activated neutrophils [90].
To our knowledge, this is the first experimental study addressing the effects of liposomal EGCG in experimental DM induced by STZ in rats. Decreasing the hepatic and pancreatic damage due to STZ administration is a valuable effect of liposomal EGCG.
Potential Limitations of the Study
No measurements of EGCG and L-EGCG in the blood or in pancreatic and hepatic tissue were performed in this study, since such quantifications were outside our aim. Future studies could be conducted to measure the concentrations of EGCG and L-EGCG in the blood and tissues. Moreover, oxidative stress parameters and MMPs could be measured in liver and pancreas tissue. Another limitation of our study is that the evaluation of endogenous insulin levels and measurement of HOMA-IR for endogenous pancreatic function were not performed.
Future studies should also investigate the effects of long-term administration of EGCG and L-EGCG on DM and its complications, as this study was focused on assessing their effects 48 h after DM induction.
Conclusions
L-EGCG pretreatment reduces oxidative stress biomarkers and MMP plasma levels 48 h after DM induction. Further studies are needed to clarify additional aspects of the protective mechanisms of EGCG in order to improve its therapeutic efficiency. Given the beneficial effects of the EGCG nanoformulation demonstrated in this study on oxidative stress, antioxidative defense, and MMP-2 and -9, we propose that L-EGCG could be considered as a novel adjuvant therapy in DM management.
|
v3-fos-license
|
2017-05-05T08:55:28.110Z
|
2016-07-19T00:00:00.000
|
12317653
|
{
"extfieldsofstudy": [
"Medicine",
"Psychology"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fpsyg.2016.01064/pdf",
"pdf_hash": "7ec12c2741867b0ae69cca5ddabb3ebc3e5c47c5",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46494",
"s2fieldsofstudy": [
"Psychology"
],
"sha1": "6d55c99a74a820b1bdba6aab2a0fda80b89a4ecf",
"year": 2016
}
|
pes2o/s2orc
|
Get Your Facts Right: Preschoolers Systematically Extend Both Object Names and Category-Relevant Facts
There is an ongoing debate over the extent to which language development shares common processing mechanisms with other domains of learning. It is well-established that toddlers will systematically extend object labels to similarly shaped category exemplars (e.g., Markman and Hutchinson, 1984; Landau et al., 1988). However, previous research is inconclusive as to whether young children will similarly extend factual information about an object to other category members. We explicitly contrast facts varying in category relevance, and test for extension using two different tasks. Three- to four-year-olds (N = 61) were provided with one of three types of information about a single novel object: a category-relevant fact (‘it’s from a place called Modi’), a category-irrelevant fact (‘my uncle gave it to me’), or an object label (‘it’s called a Modi’). At test, children provided with the object name or category-relevant fact were significantly more likely to display systematic category extension than children who learnt the category-irrelevant fact. Our findings contribute to a growing body of evidence that the mechanisms responsible for word learning may be domain-general in nature.
INTRODUCTION
Is our capacity for language the product of dedicated mental processes, or an assembly of broader cognitive mechanisms operating in unison? This question is at the heart of understanding how children learn to use and comprehend language. In the case of vocabulary development, the child requires the ability to map words to their referents. The problem, in theory, is that there is an infinite number of possible meanings for any given word (Quine, 1960). Yet, in practice, young children learn words with remarkable ease. By the age of seventeen, the average English-speaker knows more than 60,000 words (Bloom, 2000). Does this remarkable development require mental processes specific to the task of word learning?
A longstanding perspective has been that domain-specific 'constraints and biases' are necessary for solving the inductive difficulty of word learning (e.g., Golinkoff et al., 1994;Waxman and Booth, 2000). Others argue that domain-general processes are sufficient for the task, whether basic properties of learning, memory, and attention (e.g., Samuelson and Smith, 1998;Smith and Yu, 2008) or social pragmatic understanding (e.g., Baldwin and Tomasello, 1998). These contrasting perspectives need not be in opposition; children may be flexible using multiple cues and processes of differing specificities when word learning (Hirsh-Pasek et al., 2000;Yu and Ballard, 2007). However, the same word-learning behavior can often be explained in different ways. To give one example, toddlers display a 'mutual exclusivity' response where they typically select a novel, name-unknown object as the referent of a novel label, rather than a familiar, name-known object. This behavior could be the outcome of a dedicated word-learning constraint (e.g., Mervis and Bertrand, 1994), or it may involve a more domain-general attentional bias toward novel stimuli (Horst et al., 2011;Mather and Plunkett, 2012).
If word learning relies on domain-general cognitive mechanisms, then one would expect to observe parallels between the formation and retention of word mappings, and the mapping of other information. It is not just words which can be mapped to objects; other information such as associated actions, gestures, and facts about an object also require mapping. If the same behavior is evident when learning about different types of mappings, the parsimonious conclusion is that common processing mechanisms are in operation. Previous research has provided evidence that there are parallels between word learning and the mapping of actions to objects (Childers and Tomasello, 2002, 2003; Riggs et al., 2015; Dysart et al., 2016). In this paper, we focus on investigating whether there are similarities in the mapping of names and facts to objects.
Previous research has demonstrated that toddlers and preschoolers may be able to rapidly map novel nouns to objects, even with relatively brief exposure (e.g., Carey and Bartlett, 1978; Heibeck and Markman, 1987; Woodward et al., 1994; Jaswal and Markman, 2003; Holland et al., 2015), although it is less clear how well these mappings are retained over time (see Horst and Samuelson, 2008; Vlach and Sandhofer, 2012). A study by Markson and Bloom (1997) investigated whether the 'fast-mapping' of words to novel objects stretches to the mapping of facts. In their procedure, pre-schoolers were introduced to a novel word (e.g., "Let's measure the koba") for an unfamiliar object and also a novel fact (e.g., "We can use the thing my Uncle gave to me") for another object. The children successfully mapped and retained both the word and the fact for the object across a retention interval of up to a month. The children performed similarly when the fact also contained a novel label (e.g., "... came from a place called Koba"). Markson and Bloom (1997, p. 813) concluded that the specific process of fast-mapping is not limited to words. More controversially, they further claimed to have "evidence against a dedicated system for word learning in children."

Markson and Bloom's case for a domain-general view of word learning was disputed by Waxman and Booth (2000). In their paper, they emphasize that word learning comprises a variety of different processes, of which fast-mapping is just one. Hence, Markson and Bloom do not have empirical evidence that all aspects of word learning are the outcome of domain-general processes. Waxman and Booth (2000) went on to investigate the domain-specificity of another word learning process: the extension of object names to other category members. It is well-established that toddlers and preschoolers will systematically extend a novel count noun to other members of the same object category (e.g., Markman and Hutchinson, 1984; Landau et al., 1988). Some (e.g., Markman, 1989; Booth and Waxman, 2002) argue that young children make an assumption about category membership - extending a word to other members of that category. Others argue that young children have learned to use shape-similarity as a reliable cue for count noun extension (e.g., Smith et al., 1996, 2002).

Waxman and Booth (2000) investigated whether preschoolers would similarly extend newly learnt facts about objects. Preschoolers were taught either a novel word, naming a novel object ("It is called a koba"), or a novel fact ("My Uncle gave it to me") about a novel object. The children were then tested for categorical extension of the word or fact, either immediately after training, or after an interval of a week. Children were presented with the original target, two additional target-category members, and five other pairs of non-target exemplars. There were two free-choice tasks to test for extension: a 'yes/no' task and a 'choice' task. In the yes/no task, each object was presented and the child was asked whether the word or fact (depending on condition) applied to the object. In the choice task, all test objects were presented together and the child was asked either "Can you hand me the one that is a koba?" (Word condition) or "Can you hand me the one that my Uncle gave to me?" (Fact condition). Once a choice was made, the selected object was removed, and the question was repeated until no further selections were made.
Performance on the two category extension tasks varied significantly between the word and fact conditions. Children in the word condition displayed completely systematic extension to only the target-category members in the yes/no and choice tasks, both immediately and after a delay. In contrast, children in the fact condition under-extended to target-category members and over-extended to non-target objects. Hence, there appears to be a clear pattern of categorical extension of words which does not occur for the extension of facts. Waxman and Booth (2000) argue on the basis of these results that there are different processes involved in the extension of words and facts. Thus, word learning may not involve purely domain-general processes as proposed by Markson and Bloom (1997). This differential category extension of words and facts has also been reported for children as young as 2.5 years of age (Behrend et al., 2001).
However, there is an important difference between the words and facts used in this previous research. The novel words are count nouns, as indicated by their grammatical form, e.g., 'This is a koba.' Importantly, a count noun applies not just to the originally labeled exemplar, but also to all other members of the relevant category. However, it is far from clear that the novel facts tested share this property of applying to all members of the labeled category. The fact 'My Uncle gave it to me,' used by Waxman and Booth (2000), would normally be interpreted as applying only to that particular item, rather than as a fact which applies to other category members (see also Childers and Tomasello, 2003). Moreover, the pragmatic context (interpreting an unfamiliar experimenter) may leave children uncertain over whether or not to extend the new fact, creating inconsistent or ad hoc patterns of extension. A similar argument can be made about the facts used by Behrend et al. (2001), such as 'the thing that fell in the sink.' Therefore, differences between the facts and words employed in this research may simply reflect the kind of facts used, rather than a fundamental difference between facts and words. A fairer comparison with count nouns requires facts which are more readily interpreted as relevant to category membership, i.e., facts which do not concern unique or accidental properties of a specific item.
Other studies have distinguished between facts which apply only to individual objects and facts which apply to a category of objects. In an experiment reported by Diesendruck and Bloom (2003), three-year-olds were provided with either category-relevant facts (e.g., 'it is used in the kitchen') or category-irrelevant facts (e.g., 'I got this for my birthday') about novel objects. The children were then presented with a forced-choice extension test. Children had a choice to extend the new fact to another object which was either a shape match, a color match, or a material match to the original referent. Children presented with category-relevant facts were significantly more likely to make shape-based extensions than children provided with category-irrelevant facts. However, the bias to make a shape match was significantly above chance for children in both conditions. At first sight, the children appear to be extending facts which should be restricted to individual objects. Yet, the use of a 'forced-choice' procedure means that the children are required to extend the fact to one of the three objects. As suggested by Diesendruck and Bloom (2003), children in this condition may have defaulted to an 'extend by shape' strategy, given that shape is otherwise a reliable cue for extension. Moreover, the same argument can be made for the category-relevant facts - did the children really want to make a category extension, or was it merely an artifact of the testing procedure?
Finally, Deák and Toney (2013) observed that 4- to 5-year-olds extend novel facts, in apparent contradiction to the findings of Waxman and Booth (2000). Deák and Toney (2013) state that their facts are 'neutral' with respect to being category-relevant. However, some of the facts used are arguably specific to unique exemplars (e.g., 'my sister gave this to me,' 'I keep this on my desk'), whereas others are more likely to apply to object categories (e.g., 'this is from Japan'). Deák and Toney (2013) do not report data separately for each fact; hence it is unclear whether or not the children were discriminating between them on the basis of category relevance.
In sum, given the disparate findings across studies, we aim to clarify whether or not facts are systematically extended to other members of a category, and furthermore, what kinds of facts might be extended. Evidence for the categorical extension of both facts and nouns would provide further support for the domain-generality of word learning. Presently, there is ambiguity in the literature over how facts are classified with regards to category relevance. In the experiment reported below, we clearly distinguish between facts varying in category relevance, similar to Diesendruck and Bloom (2003). This manipulation allows us to test for patterns of extension in the predicted direction. Crucially, children's discrimination of different types of facts would provide more convincing evidence that category-relevant facts are truly classified and extended as such, rather than the outcome of an ad hoc strategy. We additionally contrast two kinds of facts with count nouns. Unlike Diesendruck and Bloom (2003), we use tests of category extension similar to Waxman and Booth (2000) - a free choice task, and a yes/no task. This testing procedure does not force children to extend the fact or word to at least one item. Therefore, all extension we do observe is a true reflection of children's preferences. We hypothesized that there would be no significant difference in the extension of nouns and category-relevant facts, but both would be significantly greater than the extension of category-irrelevant facts.
Participants
A total of 73 three-to four-year-olds originally participated and were tested for comprehension. Of these, 61 children correctly identified the original referent of the noun or fact, and went on to be tested for extension. In the Object Label condition, there were 19 children (9 male, 10 female) with a mean age of 3.93 years (range = 3.34-4.94). In the Category-irrelevant condition, there were 21 children (9 male, 12 female) with a mean age of 3.92 years (range = 3.21-4.94). In the Category-relevant condition, there were 21 children (6 male, 15 female) with a mean age of 3.84 years (range = 3.20-4.52). There were no significant differences in age, gender or vocabulary across conditions for tests of comprehension or extension (all ps > 0.4).
Exposure Array and Comprehension Test Array
The Exposure Array, presented to the children during the Exposure Session, comprised ten objects - six novel and four familiar (See Figure 1). Children's comprehension of the link between the object and the novel object label or fact was tested using the same array. The novel objects were sourced from a large DIY store and will be referred to as a connector, double pipe clip, elbow, pipe collar, hose clip, and pipe clip. The four familiar objects were a pink teddy, a red sock, a blue pen, and a green duck. During the Exposure Session the objects were placed upon a plain white towel.
Extension Array
The Extension Array (see Figure 2), presented to children during the Extension Test, comprised 12 novel objects. There were six pairs, comprising two exemplars of each of the six novel objects from the exposure array. These exemplars shared the same shape as the original exposure objects, but differed in color and/or size. The extension array did not include the original target object (cf. Waxman and Booth, 2000) to ensure an equal number of exemplars for each object category. The presence of the original target could otherwise have biased selection of the target category.
Design
This study consisted of a between-participants experimental design. A single independent variable of Information Type was manipulated to vary the novel information provided about the novel object. There were three Information Type conditions: Object Label, Category-irrelevant, and Category-relevant. All children were tested immediately after the exposure session. There were two dependent variables: Comprehension Accuracy and Extension Accuracy.
Procedure
Ethical approval was granted by the ethics committee of London Metropolitan University. Informed consent was obtained from parents and the head of the nursery school. The procedure was based on Waxman and Booth (2000). All children initially underwent a fast mapping task, where each child was introduced to a novel count noun or novel fact in the exposure session.
Their comprehension and extension of this novel word or fact was assessed in the testing session that immediately followed the exposure session.
Exposure Session
Each child sat down at a table where a white towel was laid out. They were presented with a transparent box containing six novel objects and four familiar objects (See Figure 1). They were asked to get all ten objects out of the box and put them on the table. This ensured that the children touched and looked at each object, for roughly equivalent amounts of time. Following this brief introduction to the objects, the experimenter started the main task. The experimenter said, "Look. Here I have a towel" and moved the ten objects to the side of the towel. Then the experimenter said, "I want to put all of these things onto my towel so that it makes a fun picture. Can you show me where to put them so it looks really good? We'll do it one at a time so we don't miss any out." The experimenter picked up one of the ten objects and said, "Let's start with this one." After the child had placed the object somewhere on the towel, the experimenter praised the participant and picked up another object asking, "Where would you put this one?" The experimenter continued this process with each of the objects asking, "And how about this one?" waiting for the child to place the object before going on to the next one. Objects were chosen at random except that the target object was never first or last. For all conditions, each of the novel objects served as the target object in rotation across participants.
The experimenter introduced some new information about the target object. In the Object Label condition, the experimenter said "This is really special - it's called a modi - where do you want to put this one?" In the Category-irrelevant condition, the experimenter said "This is really special - my uncle gave it to me - where do you want to put this one?" In the Category-relevant condition, the experimenter said "This is really special - it's from a place called Modi - where do you want to put this one?" Once all the objects had been placed on the towel the experimenter said, "That's brilliant, thank you. I think that looks really great. What do you think? Are you happy with it?" and children were allowed to change the position of any of the objects if desired.
Comprehension Test
All children took part in the Comprehension Test session directly following the Exposure Session. Depending upon when the target object was presented during the exposure session, the child experienced a gap between exposure to the word or fact mapping and subsequent comprehension testing that ranged from approximately 30 s to 2 min.
Referring to the array of ten objects the experimenter said to the child, "We're going to put these away." The experimenter then asked one of three questions: "But just before we do, can you show me which one is called a modi?" (Object Label condition), "... can you show me which one my uncle gave to me?" (Category-irrelevant condition) or "... can you show me which one comes from a place called Modi?" (Category-relevant condition). Their answer confirmed whether they had retained the word or fact mapping.
The experimenter ended the Comprehension Test by saying, "Can you help me by putting the things away now? They all go back in the box." For children who did not choose the target object, this was the end of their participation in the experiment. Children who answered the comprehension test correctly were tested for extension of the newly learned word or fact to additional objects from target and non-target categories.
Extension Test
Once the Exposure and Comprehension Test Array had been tidied away, the experimenter opened the transparent box containing the Extension Array (See Figure 2) and placed the twelve objects randomly on the table in front of the participant. All children underwent two extension tests similar to those used by Waxman and Booth (2000). There was a Yes/No task and a Choice task. To control for order effects the presentation of these tasks was counterbalanced.
Yes/No task
The experimenter pointed to each object in turn, in a random order, and asked, "Is this one a modi?" (Object Label condition), "Is this one my uncle gave to me?" (Category-irrelevant condition) or "Is this one from a place called Modi?" (Category-relevant condition).
Choice task
The experimenter asked, "Can you see anything here that's called a modi?" (Object Label condition), "Can you see anything here that my uncle gave me?" (Category-irrelevant condition) or "Can you see anything here that comes from a place called Modi?" (Category-relevant condition). After the participant's initial selection, the experimenter removed that object and prompted the child for additional selections. For example, in the Object Label condition, the experimenter said, "Are there any other ones that are modis?" As in Waxman and Booth (2000), the experimenter repeated this choice question until the child did not select any other objects.
Vocabulary Test
Children's vocabulary was tested approximately 1 week after the comprehension and extension testing using the British Picture Vocabulary Scale: Third Edition (BPVS-III; Dunn et al., 2009).
Scoring
For all conditions, selecting target objects earned a positive score, selecting a non-target object earned a negative score, whilst non-selection scored zero points across both extension tests. Therefore, selecting only the target objects earned maximum points. Scores for selecting target objects were weighted, as there was a far larger proportion of non-target objects (10) to target objects (2) - a ratio of 5:1. Under random performance - that is, if participants picked with their eyes shut, irrespective of how many picks they made - they would be five times more likely to pick a foil than a target.
Yes/No
'Yes' responses received a score of +5 for the target exemplars and −1 for the non-target category objects. 'No' responses received a score of 0. This produced a score ranging from +10 to −10 for each child.
Choice
Selecting objects from the target category received a score of +5, selection of the non-target objects received a score of −1 and objects that were not selected received a score of 0. This produced a score ranging from +10 to −10 for each child.
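To make the weighted scoring concrete, a minimal sketch in Python is given below. It assumes two target exemplars (+5 each) and ten non-target foils (-1 each), with non-selections scoring zero, as described above; the object names and the example selections are illustrative placeholders rather than materials from the study.

```python
# Minimal sketch of the weighted extension scoring described above.
# Object names and the example selections are hypothetical.

def weighted_extension_score(selected, targets, foils):
    """Return the weighted score for one extension task (range -10 to +10)."""
    score = 0
    for obj in selected:
        if obj in targets:
            score += 5   # target-category exemplar
        elif obj in foils:
            score -= 1   # non-target foil
    return score

targets = {"target_A", "target_B"}
foils = {f"foil_{i}" for i in range(1, 11)}

# Example: both targets plus one foil in the Yes/No task, targets only in the Choice task
yes_no = weighted_extension_score({"target_A", "target_B", "foil_3"}, targets, foils)
choice = weighted_extension_score({"target_A", "target_B"}, targets, foils)
print(yes_no, choice, yes_no + choice)  # 9 10 19 (aggregate range -20 to +20)
```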
Comprehension Accuracy
Of the 73 participants, 61 answered the comprehension question correctly by choosing the target object. As expected when tested immediately, a high proportion (79-88%) of the children could demonstrate their understanding of the mapping between the target object and the novel word or fact (See Table 1). There was not a significant difference in comprehension accuracy across conditions, χ2 = 0.664, p = 0.798 (Fisher's Exact Test). Binomial tests showed that performance was greater than expected by chance (1 in 6) in all three Information Type conditions (p < 0.001).
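As an illustration of the chance-level comparison above, the snippet below runs a one-sided binomial test against a 1-in-6 guessing rate over the six novel objects; the counts used are hypothetical placeholders rather than the per-condition data in Table 1.

```python
# Hypothetical example of testing comprehension accuracy against chance (1 in 6).
from scipy.stats import binomtest

n_tested, n_correct = 24, 19  # placeholder counts, not the study's data
result = binomtest(n_correct, n_tested, p=1/6, alternative="greater")
print(f"observed proportion = {n_correct / n_tested:.2f}, one-sided p = {result.pvalue:.2g}")
```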
Extension Accuracy
The 61 children who correctly chose the target in the comprehension test, also completed the extension test. Preliminary analyses of differences across the Information Type conditions revealed very similar patterns of data for the Yes/No and Choice tasks. These similarities were confirmed by significant correlations between the tasks by condition (all rs ≥ 0.45, ps ≤ 0.031). For brevity, we therefore report the analysis of scores aggregated across the two tasks. The weighted extension scores for each test were added together to provide a total weighted extension score for each participant, which could range in integers from minus 20 (exclusively selecting non-target exemplars) through zero to a maximum of plus 20 (exclusively selecting target exemplars). This aggregate score provides a complete picture of how the participant performed over both tests and does not obscure any inconsistent response patterns between the two tests.
The total weighted extension scores are presented in Figure 3. The children in the Object Label condition (M = 16.42, SD = 6.68) and the Category-relevant condition (M = 14.67, SD = 6.73) tended to choose objects from the target category at a greater rate than children in the Category-irrelevant condition (M = 7.33, SD = 8.91). A one-way ANOVA revealed a significant effect of Information Type on children's extension of novel words and facts, FWelch(2, 58) = 6.938, p = 0.003 (Welch's F statistic for unequal variances). Post hoc multiple comparison tests (Games-Howell correction) revealed a significant difference between the Object Label and the Category-irrelevant conditions (p = 0.002) and, crucially, a significant difference between the Category-relevant and Category-irrelevant conditions (p = 0.013). There was no significant difference between the Category-relevant and Object Label conditions (p = 0.696).
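For readers who wish to reproduce this kind of analysis, the following is a self-contained sketch of Welch's one-way ANOVA for unequal variances (the omnibus test reported above); the simulated group scores are placeholders, not the study data, and pairwise Games-Howell comparisons would be run separately (for example, with the pingouin library).

```python
# Minimal implementation of Welch's one-way ANOVA; group data are simulated placeholders.
import numpy as np
from scipy.stats import f as f_dist

def welch_anova(*groups):
    k = len(groups)
    n = np.array([len(g) for g in groups], dtype=float)
    means = np.array([np.mean(g) for g in groups])
    variances = np.array([np.var(g, ddof=1) for g in groups])
    w = n / variances                                  # per-group weights
    grand_mean = np.sum(w * means) / np.sum(w)
    numerator = np.sum(w * (means - grand_mean) ** 2) / (k - 1)
    lam = np.sum((1 - w / np.sum(w)) ** 2 / (n - 1))
    denominator = 1 + 2 * (k - 2) / (k ** 2 - 1) * lam
    F = numerator / denominator
    df1, df2 = k - 1, (k ** 2 - 1) / (3 * lam)
    return F, df1, df2, f_dist.sf(F, df1, df2)

rng = np.random.default_rng(0)
label = rng.normal(16.4, 6.7, 19)        # simulated scores per condition
relevant = rng.normal(14.7, 6.7, 21)
irrelevant = rng.normal(7.3, 8.9, 21)
F, df1, df2, p = welch_anova(label, relevant, irrelevant)
print(f"F_Welch({df1:.0f}, {df2:.1f}) = {F:.3f}, p = {p:.4f}")
```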
Individual Extension Patterns
The above analyses have revealed lower scores for children in the Category-irrelevant condition than for children in either the Object Label or Category-relevant conditions. To better understand why this difference occurred, every child's performance was classified into one of three primary response patterns for both extension tasks. A 'target category only' extension pattern described participants who selected both exemplars of the target category, but no other test objects.
An 'extend to all' extension pattern described participants who selected at least 11 of the 12 test objects. An 'inconsistent' extension pattern described participants who selected objects from both the target and non-target categories of objects. Only two participants selected no test objects at all within a task, and neither did this for both tasks. Three participants did not select the target category, but their selection was limited to a single non-target category. Only one of these three participants replicated this pattern of responding across both the Yes/No and Choice tasks. The remaining selection patterns were seemingly random. Hence, all these extension patterns were summarized as 'inconsistent.' Children's extension patterns for each extension task are presented in Table 2. The Object Label and Category-relevant conditions exhibited similar extension patterns - most children (67-79%) extended only to the target category in both the Yes/No and the Choice tasks. In contrast, less than 40% of children in the Category-irrelevant condition extended only to the target category in either task. The remaining children in the Category-irrelevant condition were split fairly evenly between the 'Extend to All' and the 'Inconsistent' extension patterns. A 3 × 3 χ2 test on the Yes/No data demonstrates a significant relationship between condition and extension pattern, χ2 = 15.346, p = 0.003 (Fisher's Exact Test). The Choice data provide similar results, χ2 = 10.574, p = 0.026 (Fisher's Exact Test). Collapsing the extension patterns into two categories, 'Target Category Only' and 'Other', allows post hoc multiple comparisons using 2 × 2 χ2 tests to explore differences between pairs of conditions. The adjusted critical p-value of 0.025 reflects the fact that the data for each condition in the post hoc comparisons have been analyzed twice. There was a significant difference between the children's extension pattern in the Category-irrelevant and the Category-relevant conditions in the Yes/No test, χ2 = 6.462, p = 0.011, and in the Choice test, χ2 = 6.222, p = 0.013. In contrast, a comparison of Object Label and Category-relevant for both the Yes/No task (χ2(1) = 1.200, p = 0.273) and the Choice task (Fisher's Exact Test, p = 0.698) reveals that children do not extend general facts in a significantly different way from object labels.
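A hedged sketch of the collapsed 2 × 2 comparison ('Target Category Only' vs. 'Other') is shown below; the cell counts are invented for illustration and are not the frequencies reported in Table 2.

```python
# Illustrative 2x2 post hoc comparison of extension patterns; counts are hypothetical.
from scipy.stats import chi2_contingency, fisher_exact

# rows: Category-relevant, Category-irrelevant; columns: target-only, other
table = [[15, 6],
         [7, 14]]
chi2, p, dof, expected = chi2_contingency(table, correction=False)
_, p_exact = fisher_exact(table)
print(f"chi2({dof}) = {chi2:.3f}, p = {p:.3f}; Fisher exact p = {p_exact:.3f}")
```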
DISCUSSION
The current experiment compared children's extension of an object label and two different kinds of facts. The specific fact was relevant to an individual object ("My uncle gave this to me"), whereas the general fact was relevant to the category from which the object came ("It comes from a place called Modi"). Following Waxman and Booth (2000), we used extension tasks which allowed children to freely decide whether or not to extend the word or fact (cf. Diesendruck and Bloom, 2003). It was found that children's extension pattern varied as a function of condition. Children in the Category-relevant and Object Label conditions displayed similar response patterns of exclusively selecting members of the target object category. In contrast, children in the Category-irrelevant condition were more likely to extend the specific fact to non-target category objects than children in either the Category-relevant or Object Label conditions. It would appear that if the fact is category-relevant rather than object-specific, children will systematically extend the fact to appropriate same-shaped objects. These results strongly suggest that children can extend a fact to other same-category items just like they do with words.
Our study is not the first to provide evidence of preschoolers' extension of facts to same-category exemplars. However, we provide a more stringent demonstration that young children are capable of identifying a novel fact as category-relevant and of spontaneously extending the fact to other category members. In contrast to Diesendruck and Bloom (2003), the use of a free choice procedure means that the children were not forced to select an object during the extension test. Furthermore, we explicitly compare children's responses to facts varying in category relevance. Thus, we have evidence that the extension of facts is actually sensitive to category relevance, rather than occurring as an indiscriminate, ad hoc strategy. Our findings extend the work of Diesendruck and Bloom (2003) and Deák and Toney (2013) and demonstrate that young children's readiness to extend category-relevant facts stands up to a more robust test of extension.
One issue which remains concerns the extension pattern of children in the Category-irrelevant condition. It is less clear why children in the Category-irrelevant condition chose to extend the fact to some (or all) of the target and non-target exemplars, when it arguably applies only to the originally designated object. Given the use of a free choice task, one might have expected the children not to have extended the fact at all. However, the experiment may have nonetheless placed pragmatic 'pressure' on the children's responses. For example, children may have thought that the experimenter would not ask the question if the answer was no. Moreover, with such a large array of test objects available, children might think it odd for the fact not to apply to at least some of the objects present. Alternatively, the children may have struggled to clearly classify the fact as either category-relevant or category-irrelevant, resulting in less coherent extension patterns both within and across participants. Children in the Category-irrelevant condition exhibited a larger number of inconsistent selection patterns. Our experiment was not designed to systematically investigate which foils were selected. This would be an interesting avenue for future research. An error analysis may provide some insight into children's extension choices for category-irrelevant facts.

Waxman and Booth (2000, 2001) have argued that the extension of facts may not display the same characteristics as the extension of nouns. They argue that while the appropriate extension pattern for nouns can largely be determined by grammatical form, the extension profile for facts is much less clear and often depends on broader world knowledge. While this might be true, Waxman and Booth point to differences between the nature of words and facts themselves, rather than differences in the mechanisms underlying extension. Arguably, a fact about the geographic origin of an object may not serve to define category membership in the same manner as an object label. Yet, we have shown that when a fact can be interpreted as having relevance to a category, it will be systematically extended to other category exemplars. Importantly, the children had no prior experience of the novel fact and how it is extended. Thus, the spontaneous and systematic extension of a novel fact suggests a general mechanism for extension, rather than case-by-case learning. Further research will need to establish whether, under suitable experimental conditions, facts which denote specific objects will be strictly restricted to the original referent.
A final caveat is that the fact introduced to children in the Category-relevant condition was "It comes from a place called Modi", which contains a novel non-word. Perhaps children are extending this fact, not because it is category-relevant, but because they associate the novel word with the target object. They may then extend this novel word, rather than the fact, to other similar-shaped objects. However, this interpretation is unlikely for two main reasons. First, other researchers have shown that children will not extend category-irrelevant facts when they do contain a novel word (Behrend et al., 2001) and, vice versa, children will extend category-relevant facts when they don't contain a novel word (Diesendruck and Bloom, 2003). Second, the category-relevant fact used in the present study introduces a novel proper noun, and pre-schoolers have been shown not to extend proper nouns (see Hall, 1999). So, if children in this experiment had linked the novel word rather than the fact with the object, they would have been more likely to extend the fact at significantly less than chance levels - neither to the target nor the non-target category objects. This was not the case.
So it would appear that children treat the extension of novel category-relevant facts and novel object labels that they have fast mapped in a very similar way. This does not necessarily mean that the same mechanism in the brain is used for extension of linguistic facts and words. However, arguing that there are two separate systems (for words and facts) determining whether a piece of information applies to an individual or a category seems a less likely explanation, and certainly a more complex one. Furthermore, our findings are consistent with other studies demonstrating parallels between word learning and the mapping and extension of other types of information to objects (Childers and Tomasello, 2002, 2003; Riggs et al., 2015). Thus, contrary to the view of Waxman and Booth (2000), the extension of words appears to be part of a more domain-general mechanism. The evidence here lends support to Bloom's (2000) theory that word learning is domain-general, drawing upon a variety of general cognitive processes in a unique way to form and retain word meanings.
AUTHOR CONTRIBUTIONS
AH conceived and designed the experiment with input from AS and KR. AH collected and analyzed the data. All authors contributed to the theoretical interpretation of the data. AH and EM drafted the manuscript with input from AS and KR. All authors approved the final version of the manuscript for publication.
|
v3-fos-license
|
2023-02-24T17:28:18.613Z
|
2023-02-01T00:00:00.000
|
257124268
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": null,
"oa_url": null,
"pdf_hash": "f80e648fdb58fcefe7a973645644c708bd0ae249",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46496",
"s2fieldsofstudy": [
"Medicine",
"Chemistry"
],
"sha1": "567c1374f612d0c1fed10a37d07a3290f8a21201",
"year": 2023
}
|
pes2o/s2orc
|
Exploration of the Safety and Solubilization, Dissolution, Analgesic Effects of Common Basic Excipients on the NSAID Drug Ketoprofen
Since its introduction to the market in the 1970s, ketoprofen has been widely used due to its high efficacy in moderate pain management. However, its poor solubility and ulcer side effects have diminished its popularity. This study prepared forms of ketoprofen modified with three basic excipients: tris, L-lysine, and L-arginine, and investigated their ability to improve water solubility and reduce ulcerogenic potential. The complexation/salt formation of ketoprofen and the basic excipients was prepared using physical mixing and coprecipitation methods. The prepared mixtures were studied for solubility, docking, dissolution, differential scanning calorimetry (DSC), Fourier transform infrared spectroscopy (FTIR), in vivo evaluation for efficacy (the writhing test), and safety (ulcerogenic liability). Phase solubility diagrams were constructed, and a linear solubility (AL type) curve was obtained with tris. Docking studies suggested a possible salt formation with L-arginine using Hirshfeld surface analysis. The order of enhancement of solubility and dissolution rates was as follows: L-arginine > L-lysine > tris. In vivo analgesic evaluation indicated a significant enhancement of the onset of action of analgesic activities for the three basic excipients. However, safety and gastric protection indicated that both ketoprofen arginine and ketoprofen lysine salts were more favorable than ketoprofen tris.
Introduction
An estimated 40% of commercially available drugs and up to 90% of newly discovered drug candidates have poor water solubility [1,2]. As a result, the development of solubilization techniques, as well as the search for new hydrotropes and potential water-soluble excipients to enhance the solubility and dissolution rates of poorly soluble drugs, has been an ongoing endeavor for formulation scientists [3,4]. Ketoprofen (Figure 1) is a non-steroidal anti-inflammatory drug (NSAID) that was discovered in 1968 [5]. It is the most commonly prescribed NSAID for various acute and chronic pain conditions, such as moderate to severe dental pain and osteoarthritis [5]. Ketoprofen is sold worldwide under different brand names, including as Orudis® capsules in the USA and as the over-the-counter (OTC) medication Ketofan® (25 mg immediate-release tablets and 50 mg capsules) on the Egyptian market. However, poor water solubility and dissolution rates of ketoprofen have resulted in erratic drug absorption and inconsistent bioavailability, especially in the first part of the gastrointestinal tract. As a weak acid, the solubility of ketoprofen in the acidic gastric fluid is minimal [6,7]. Numerous solubilization techniques have been employed to improve solubility and dissolution rates of different water-insoluble drugs. These techniques include particle size reduction, solid dispersion, complexation, salt formation, cocrystallization, and nanoparticle encapsulation [8][9][10]. In addition, many water-soluble excipients have been used to improve the solubility and bioavailability of poorly soluble drugs. These include water-soluble macromolecules and hydrophilic polymers, such as polysaccharides, polyvinylpyrrolidone, polyethylene glycol, and cyclodextrins [10]. While these excipients have successfully enhanced the solubility of many drugs, their solubilizing capacity can be limited, requiring that they be used in large amounts, which can raise toxicological and regulatory concerns [3,10]. Low-molecular-weight excipients, such as urea and sugars, have also been extensively investigated. However, their solubilizing capacity is limited due to both their chemical neutrality and their lack of sufficient binding sites and ionizable groups [11].
In recent years, there has been a growing interest in investigating and utilizing amino acids due to their safety and tolerability. Amino acids are classified as GRAS (Generally Recognized as Safe) and are used as dietary supplements [4]. In addition, amino acids have been successfully used to solubilize both ionizable and non-ionizable drugs. They are small molecules with diverse chemical structures, and can be broadly classified into mainly amphoteric (e.g., glycine and alanine), acidic (e.g., aspartic acid and glutamic acid), or basic (e.g., arginine and lysine) amino acids ( Figure 1). Additional side chains, such as hydroxyl and sulfhydryl groups, can boost their solubilizing capacity [4,12].
Ketoprofen-L-lysine can exist in salt or cocrystal forms, depending on the preparation method. Both forms have enhanced dissolution characteristics, but the bitterness scores for these two forms of ketoprofen-L-lysine were higher than that of the parent drug [13]. In another study, ketoprofen-tromethamine was prepared by a coprecipitation method, resulting in a new crystalline state with significantly enhanced solubility and dissolution rates [14].
Tromethamine (also known as tris(hydroxymethyl)aminomethane, or tris) is a basic excipient and a widely used buffering agent in biochemistry and protein assays. Tris has been used to form water-soluble salts from weak acids such as ketorolac and nimesulide [15].
This study explored the impact of three basic excipients (lysine, arginine, and tromethamine, or tris) with different basicity and pKa values (Figure 1) on the solubility, dissolution rates, and analgesic efficacy of ketoprofen, as well as ulcer side effects. The aim was to rank and showcase any particular advantages of these basic excipients in improving the biopharmaceutical properties and safety profile of the NSAID drug ketoprofen. Numerous solubilization techniques have been employed to improve solubility and dissolution rates of different water-insoluble drugs. These techniques include particle size reduction, solid dispersion, complexation, salt formation, cocrystallization, and nanoparticle encapsulation [8][9][10]. In addition, many water-soluble excipients have been used to improve the solubility and bioavailability of poorly soluble drugs. These include water-soluble macromolecules and hydrophilic polymers, such as polysaccharides, polyvinylpyrrolidone, polyethylene glycol, and cyclodextrins [10]. While these excipients have successfully enhanced the solubility of many drugs, their solubilizing capacity can be limited and require that they be used in large amounts, which can raise toxicological and regulatory concerns [3,10]. Low-molecular-weight excipients, such as urea and sugars, have also been extensively investigated. However, their solubilizing capacity is limited due to both their chemical neutrality and their lack of sufficient binding sites and ionizable groups [11].
In recent years, there has been a growing interest in investigating and utilizing amino acids due to their safety and tolerability. Amino acids are classified as GRAS (Generally Recognized as Safe) and are used as dietary supplements [4]. In addition, amino acids have been successfully used to solubilize both ionizable and non-ionizable drugs. They are small molecules with diverse chemical structures, and can be broadly classified into mainly amphoteric (e.g., glycine and alanine), acidic (e.g., aspartic acid and glutamic acid), or basic (e.g., arginine and lysine) amino acids ( Figure 1). Additional side chains, such as hydroxyl and sulfhydryl groups, can boost their solubilizing capacity [4,12].
Ketoprofen-L-lysine can exist in salt or cocrystal forms, depending on the preparation method. Both forms have enhanced dissolution characteristics, but the bitterness scores for these two forms of ketoprofen-L-lysine were higher than that of the parent drug [13]. In another study, ketoprofen-tromethamine was prepared by a coprecipitation method, resulting in a new crystalline state with significantly enhanced solubility and dissolution rates [14].
Tromethamine (also known as tris aminomethane) is a basic excipient and a widely used buffering agent in biochemistry and protein assays. Tris has been used to form water-soluble salts from weak acids such as ketorolac and nimesulide [15].
This study explored the impact of three basic excipients (lysine, arginine, and tromethamine, or tris) with different basicity and pKa values ( Figure 1) on the solubility, dissolution rates, and analgesic efficacy of ketoprofen, as well as ulcer side effects. The aim was to rank and showcase any particular advantages of these basic excipients in improving the biopharmaceutical properties and safety profile of the NSAID drug ketoprofen.
The specific objectives of the study included the formation of solid dispersions and physical mixtures, the construction of phase solubility diagrams, thermal and dissolution studies, spectral and docking analysis, analgesic evaluation using the writhing test in mice, gastric ulcer liability, and histopathological examination.
Materials
Ketoprofen was provided by Pharco Pharmaceuticals (Alexandria, Egypt). L-arginine was purchased from Fluka AG (Buchs, Switzerland). L-lysine, tris, and sodium lauryl sulfate were also used.

Ketoprofen-L-lysine, L-arginine, and tris physical mixtures (PM) were prepared separately by weighing equivalent molar weights in milligrams. The drug-excipient mixture was then thoroughly mixed in a porcelain dish for 2-3 min using a spatula and sieved through a 125-µm sieve. To prepare coprecipitated mixtures of ketoprofen with L-lysine, L-arginine, and tris, specific weights (in mg) equivalent to the molecular weight of ketoprofen were dissolved in 20 mL of methanol. Accurate weights (in mg) equivalent to the molecular weights of the basic amino acids (L-lysine and L-arginine) and tris were dissolved individually in 10 mL of distilled water. The methanolic solution of ketoprofen and the aqueous solutions of the basic excipients were mixed in a porcelain dish with a 100-mL capacity. The porcelain dish was placed on a hot plate stirrer (LabTech, Daihan, Korea), adjusted to 80 °C, and left until complete evaporation. The resulting powder was ground with a mortar and pestle and passed through a 125 µm sieve.
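As a rough illustration of the 1:1 molar weighing step described above, the short calculation below uses standard molecular weights; the 500 mg batch size is an arbitrary example and not a quantity taken from this study.

```python
# Back-of-the-envelope 1:1 molar ratio calculation; batch size is hypothetical.
MW = {"ketoprofen": 254.28, "L-lysine": 146.19, "L-arginine": 174.20, "tris": 121.14}  # g/mol

ketoprofen_mg = 500.0  # hypothetical batch size
for excipient in ("L-lysine", "L-arginine", "tris"):
    excipient_mg = ketoprofen_mg * MW[excipient] / MW["ketoprofen"]
    print(f"{excipient}: {excipient_mg:.1f} mg per {ketoprofen_mg:.0f} mg ketoprofen")
```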
Equilibrium Solubility Studies
Excess amounts of ketoprofen were added to various solutions containing different concentrations (0, 0.1, 0.2, 0.4, 0.5, 1, 2, and 3% w/v) of the basic excipients L-arginine, L-lysine, and tris. These mixtures were placed in a thermostatic shaking water bath (Shel Lab water bath, Sheldon, Cornelius, OR, USA) at 37 °C ± 0.5 °C, rotating at a speed of 120 strokes per minute. The samples were left for 48 h to attain equilibrium; aliquots (4 mL) were withdrawn, filtered, and measured spectrophotometrically at λmax = 260 nm using a UV-visible spectrophotometer (JENWAY Model 6305, Chelmsford, UK). The solubility data (µg/mL) were obtained from the standard calibration curve with acceptable linearity (R2 = 0.9955). The apparent 1:1 solubility (stability) constant (K) was calculated from the slope of the phase solubility diagram obtained from the regression line of solubility versus excipient concentration (mM) plots [9,15], using the standard phase solubility relation K1:1 = slope/[S0 × (1 - slope)], where S0 is the intrinsic solubility of ketoprofen in the absence of excipient.
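The sketch below illustrates how the slope of a linear (AL-type) phase solubility plot can be converted into the apparent 1:1 constant under the standard slope/[S0(1 - slope)] assumption; the data points are invented placeholders, not the measured solubility values from this study.

```python
# Illustrative phase solubility fit; concentrations are hypothetical placeholders.
import numpy as np

excipient_mM = np.array([0.0, 8.3, 16.5, 33.0, 41.3, 82.6, 165.1, 247.7])
ketoprofen_mM = np.array([0.6, 0.9, 1.1, 1.5, 1.7, 2.3, 3.2, 4.1])

slope, intercept = np.polyfit(excipient_mM, ketoprofen_mM, 1)
S0 = ketoprofen_mM[0]                 # intrinsic solubility (no excipient), in mM
K_mM = slope / (S0 * (1 - slope))     # apparent 1:1 stability constant, mM^-1
print(f"slope = {slope:.4f}, K(1:1) = {K_mM * 1000:.1f} M^-1")
```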
Differential Scanning Calorimetry (DSC) and Fourier Transform Infrared Spectroscopy (FTIR)
Samples of ketoprofen, arginine, lysine, tris, physical mixtures (PM), and coprecipitated mixtures were weighed (2-4 mg) and placed in aluminum pans. A Mettler Toledo Star System DSC (Mettler Toledo, Zürich, Switzerland), calibrated with an indium standard and purged with nitrogen, was used to heat the samples from 30 to 300 °C at a rate of 10 °C/min. For FTIR, samples were compressed into potassium bromide discs using a 10-ton hydraulic press and analyzed on a Thermo Scientific Nicolet iS10 FTIR spectrophotometer (Waltham, MA, USA). The samples were scanned 16 times from 400 to 4000 cm−1, and data were collected using Omnic software (Thermo Scientific, Waltham, MA, USA).
In Vitro Dissolution
In vitro dissolution studies were conducted using two sequential dissolution media. The first dissolution medium consisted of simulated gastric fluid (pH 1.2, 900 mL) containing 1% w/w sodium lauryl sulfate (SLS) for the first two hours. Then, in the same flask, the pH of the medium was increased to 6.8 using dibasic sodium phosphate for an additional three hours to simulate intestinal fluid. The dissolution media were agitated using USP apparatus 2 at 50 rpm and a temperature of 37 °C. Ketoprofen powder, PM, and Coppt dispersed mixtures weighing 20 mg (or equivalent to 20 mg of ketoprofen) were filled into hard gelatin capsules of size 0 (Isolab Laborgeräte GmbH, Am Dillhof, Germany), placed in dissolution sinkers, and transferred to the dissolution flasks. A 5 mL sample was withdrawn at specified intervals and replaced with 5 mL of fresh dissolution medium. The samples were analyzed spectrophotometrically, as previously described in Section 2.3.
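Because 5 mL is withdrawn and replaced at each time point, cumulative release calculations typically correct for the drug removed in earlier samples. The sketch below shows one common form of that correction; the calibration slope and absorbance values are hypothetical placeholders, not data from this study.

```python
# Cumulative % dissolved with volume-replacement correction; values are hypothetical.
import numpy as np

V_vessel, V_sample = 900.0, 5.0                 # mL
dose_mg = 20.0
calib_slope = 0.065                             # absorbance per (ug/mL), hypothetical
absorbances = np.array([0.10, 0.22, 0.41, 0.63, 0.80, 0.95])

conc = absorbances / calib_slope                # ug/mL in the vessel at each sampling time
in_solution_mg = conc * V_vessel / 1000.0       # mg currently dissolved in the vessel
# mass removed in all previous samples (none before the first time point)
removed_mg = np.concatenate(([0.0], np.cumsum(conc[:-1] * V_sample / 1000.0)))
cumulative_pct = (in_solution_mg + removed_mg) / dose_mg * 100.0
print(np.round(cumulative_pct, 1))
```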
Molecular Docking
Molecular docking studies were performed with the Molecular Operating Environment (MOE) 2014.09 software (Chemical Computing Group, Montreal, QC, Canada) to predict the stability and possible orientation of the various bases on the surface of ketoprofen. The 3D structure of ketoprofen was constructed using the builder interface, and its energy was minimized to an RMSD (root mean square deviation) gradient of 0.01 kcal/mol using the QuickPrep tool in the MOE software. Similarly, the 3D structures of arginine, lysine, and tromethamine were built using the MOE builder, and their energies were minimized. The three bases were docked onto the surface of ketoprofen using an induced-fit docking protocol with the Triangle Matcher placement method and the dG scoring system for pose ranking. After a visual assessment of the resulting docking poses, those with the highest stability and lowest binding free energy values were selected and reported.
Writhing Assay
Mice weighing between 25 and 30 g were used in the experiment. The ability of ketoprofen and the prepared coprecipitated mixtures of ketoprofen with the three basic excipients (tris, L-lysine, and L-arginine) to inhibit acetic acid-induced writhing was assessed as previously described [11]. The mice were divided into five groups, as outlined in Table 1. A dose of 50 mg/kg or its equivalent was dispersed in an aqueous solution containing 0.25% carboxymethyl cellulose (CMC) to make the tested solutions (2 mg/mL). An accurate sample (0.5 mL) of the tested solutions was administered orally through a gastric tube. After the dose was administered, 30 µL of diluted acetic acid solution (0.6% v/v) was injected intraperitoneally into the animals. Induced writhes were counted for 20 min.
Indomethacin-Induced Ulcer
Male albino rats were fasted for 24 h and given access to water. They were divided into six groups of five rats each. The positive control group received a single oral dose of indomethacin (30 mg/kg) through a gastric tube, while the control group received saline. The remaining four groups were given a single oral dose of 50 mg/kg of ketoprofen or its equivalent in K:tris, K:lysine, and K:arginine Coppt mixtures. Four hours after dosing, the animals were sacrificed, and their stomachs were dissected, flushed with saline, and opened for inspection of ulcer formation [16].
The ulcers were counted and quantified by pinning the stomach on a piece of flat cork and scoring the ulcers using a dissecting microscope. The area of mucosal damage (ulcer) was expressed as a percentage of the total surface area of the mucosal surface of the stomach [16].
Histopathological Documentation
The dissected stomachs from the control and treated groups were fixed in 10% formalin-buffered saline for several days, dehydrated, embedded in paraffin blocks, and then sectioned into 5 µm-thick slices. As previously reported, the final sections were stained with H&E stain for microscopic examination and imaging [17,18]. The aggregation of polysaccharides was visualized using periodic acid-Schiff (PAS) staining [19].
Solubility Studies
The results of the equilibrium solubility of ketoprofen in the presence of increasing concentrations (w/v %) of the three basic excipients (L-lysine, L-arginine, and tris) are shown in Figure 2A. Phase solubility curves of ketoprofen with the three basic excipients were constructed (Figure 2B-D) to determine the solubility type. At a concentration of 3% w/v, the three basic excipients tris, L-lysine, and L-arginine significantly (p < 0.05) improved the solubility of ketoprofen by 4-, 4.65-, and 6.8-fold, respectively. Arginine showed superior solubilization capacity compared to the other two basic excipients (Figure 2A). This solubility enhancement can be attributed to the electrostatic interaction and the alkalinizing effect of the dissolved basic excipients on the stagnant diffusion layer around the drug particles, which increases the ionization of the weakly acidic drug [20]. Arginine (pKa = 12.48) is the strongest basic excipient compared to tris (pKa = 8) and lysine (pKa = 10); hence, the diffusion layer becomes more alkaline and more ionization occurs, favoring the solubilization of the weak acid ketoprofen. Figure 2B-D shows the phase solubility curves (solubility (mM) against concentration (mM) of the basic excipients). Tris showed a purely linear relationship (AL type), whereas nonlinear relationships were observed with arginine and lysine.

Similarly, the solubility of ketoprofen in the prepared physical mixtures (PM) and coprecipitated mixtures with tris, L-lysine, and L-arginine was enhanced (Figure 3). Ketoprofen's solubility increased significantly (p < 0.05), and this increase depended on the type of excipient and the preparation method of the solid dispersion. For example, coprecipitated dispersed mixtures demonstrated superior enhancement in solubility compared to physical mixtures. Coprecipitation generates drug particles of smaller size owing to the solvent effect, in addition to producing more intimate contact and interactions with the basic excipients; hence, higher solubility can be achieved [1].
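As an illustration of the phase-solubility treatment, the sketch below computes the fold-enhancement in solubility and fits a straight line to a phase-solubility curve, a high R² being consistent with an AL-type (linear) profile. The data points are hypothetical and do not reproduce the study data.

```python
# Sketch: fold-enhancement of ketoprofen solubility and a simple linearity
# check of a phase-solubility curve (an AL-type profile is linear).
# The data points below are hypothetical.
import numpy as np

excipient_conc_mM = np.array([0, 25, 50, 100, 150, 200])
ketoprofen_solubility_mM = np.array([0.47, 0.9, 1.4, 2.3, 3.2, 4.1])  # hypothetical

fold = ketoprofen_solubility_mM[-1] / ketoprofen_solubility_mM[0]
print(f"Fold enhancement at the highest excipient level: {fold:.1f}x")

# Linear fit: a high R^2 supports an AL-type (linear) phase-solubility profile
slope, intercept = np.polyfit(excipient_conc_mM, ketoprofen_solubility_mM, 1)
pred = slope * excipient_conc_mM + intercept
ss_res = np.sum((ketoprofen_solubility_mM - pred) ** 2)
ss_tot = np.sum((ketoprofen_solubility_mM - ketoprofen_solubility_mM.mean()) ** 2)
print(f"slope = {slope:.4f} mM/mM, R^2 = {1 - ss_res / ss_tot:.3f}")
```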
FTIR and DSC Studies
FTIR spectroscopy and DSC thermal analysis were used to detect possible physicochemical interactions between ketoprofen and the three basic excipients under investigation. Figure 4A shows the FTIR spectra of ketoprofen, the three basic excipients, and their physical and coprecipitated mixtures. Specific IR absorption bands of pure ketoprofen detected at 1610 cm−1 and 1684 cm−1 were due to stretching of the ketone group and the carboxylic carbonyl group (C=O), respectively [14]. In the physical mixtures, the characteristic peaks appeared at their assigned wavenumbers and were simply additive to the FTIR spectra of the three basic excipients, indicating that no observable physicochemical interactions could be identified. In contrast, the vibrational bands of the keto and carbonyl groups of ketoprofen were broadened and shifted in the coprecipitated mixtures of ketoprofen with tris, L-lysine, and L-arginine, suggesting hydrogen bonding and electrostatic interactions with the basic/cationic excipients [21]. Similarly, DSC analysis revealed the complete disappearance of the ketoprofen melting transition from both the physical and coprecipitated mixtures with L-lysine and tris, indicating the presence of both physicochemical and electrostatic attraction. In contrast, a weak melting transition was found in the K:L-arginine PM, but a complete disappearance of the ketoprofen melting transition was observed in the corresponding coprecipitated mixture.
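A small sketch of how a carbonyl band shift of the kind described above could be located programmatically is given below; it assumes SciPy is available, the spectra are synthetic placeholders, and the peak positions are illustrative only.

```python
# Sketch: locating the ketoprofen carbonyl band (~1684 cm^-1) in drug and
# mixture spectra and reporting any shift. The spectra here are synthetic
# placeholders; real data would come from the instrument export.
import numpy as np
from scipy.signal import find_peaks

wavenumber = np.arange(1500, 1800, 1.0)

def synthetic_band(center, width=8.0, height=1.0):
    return height * np.exp(-0.5 * ((wavenumber - center) / width) ** 2)

spectra = {
    "ketoprofen": synthetic_band(1684) + synthetic_band(1610, height=0.8),
    "K:arginine Coppt": synthetic_band(1672, width=14) + synthetic_band(1610, height=0.8),
}

for name, absorbance in spectra.items():
    peaks, _ = find_peaks(absorbance, prominence=0.2)
    carbonyl = wavenumber[peaks][np.argmin(np.abs(wavenumber[peaks] - 1684))]
    print(f"{name}: carbonyl band at {carbonyl:.0f} cm^-1 "
          f"(shift {carbonyl - 1684:+.0f} cm^-1 vs pure drug)")
```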
Dissolution Studies
For drugs with poor solubility, determination of dissolution rates is both a regulatory requirement and essential for discriminating between newly developed formulations. The dissolution medium should mimic physiological fluids and conditions [22]. To determine the exact amount of ketoprofen to be used for the in vitro dissolution study under sink conditions, equilibrium solubility was measured in simulated gastric fluid (0.1 M HCl) containing three different concentrations (0.1%, 0.5%, and 1% w/v) of sodium lauryl sulfate as a surfactant. Sodium lauryl sulfate is an anionic surfactant that was selected because it mimics the anionic natural surfactants/bile salts in gastric fluid. To both prevent the surface flotation of drug particles and simulate in vivo performance, it is crucial to wet the dispersed particles prior to dissolution. The surface tension of gastric fluid is considerably lower than that of water, suggesting the presence of surfactants in this region [23]. Ketoprofen solubility increased from 75 µg/mL to 127.5, 150, and 190 µg/mL with increasing SLS concentration (0.1%, 0.5%, and 1% w/v, respectively). Therefore, an acid dissolution medium with 1% SLS was selected to ensure sink conditions. This study adopted both acidic gastric conditions and a physiological pH of 6.8 to simulate intestinal pH. The first two hours of dissolution were studied at acidic pH, where the solubility of ketoprofen (a weakly acidic drug with a pKa of 4.4) is very low (0.1 mg/mL) because the pH is significantly lower than the pKa and the drug is present predominantly in its un-ionized form; this stage therefore probes the capacity of the three basic excipients to improve the dissolution rate under gastric conditions. Dissolution was then continued at pH 6.8, where the drug exists predominantly in its ionized, more soluble form, to simulate intestinal conditions. Figure 5 shows the dissolution profiles of ketoprofen from the prepared physical and dispersed mixtures, and Table 2 presents three dissolution parameters: the time required for the dissolution of 50% of ketoprofen (T50%) and the relative dissolution rates at 60 min and 300 min (RDR60 and RDR300, respectively). Slow and incomplete dissolution was recorded for ketoprofen powder over 300 min, with only 50% of the drug dissolving in 240 min. In contrast, Ketofan® capsules showed nearly doubled RDR60 and RDR300 dissolution parameters. Similarly, physical mixtures of ketoprofen with the three basic excipients enhanced the dissolution parameters by 1.26- to 1.74-fold and shortened the T50% to 120-180 min, compared with the 240 min recorded for ketoprofen powder. Compared to the physical mixtures, superior dissolution rates were recorded for the coprecipitated mixtures. For example, the K:tris, K:lysine, and K:arginine coprecipitates recorded T50% values of 120, 60, and 30 min, respectively, compared to 180, 120, and 120 min estimated for the K:tris, K:lysine, and K:arginine physical mixtures, respectively. These results indicated that the preparation technique of the dispersed mixtures made a marked difference in dissolution rates.
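The dissolution parameters used above (T50% and RDRt) can be estimated from a cumulative release profile as in the sketch below; the profiles shown are hypothetical and the T50% estimate uses simple linear interpolation.

```python
# Sketch: estimating T50% (time to 50% dissolved, by linear interpolation)
# and the relative dissolution rate RDR_t (% dissolved by a formulation at
# time t divided by % dissolved by plain ketoprofen powder at the same t).
# The profiles below are hypothetical.
import numpy as np

times = np.array([15, 30, 60, 120, 180, 240, 300])  # min

profiles = {  # cumulative % dissolved, hypothetical
    "ketoprofen powder": np.array([5, 9, 15, 27, 38, 50, 58]),
    "K:arginine Coppt":  np.array([38, 52, 68, 82, 90, 94, 96]),
}

def t50(times, released):
    # released must be monotonically increasing for np.interp to be valid
    return float(np.interp(50.0, released, times))

def rdr(profiles, reference, t_index):
    ref = profiles[reference][t_index]
    return {k: round(v[t_index] / ref, 2) for k, v in profiles.items() if k != reference}

for name, rel in profiles.items():
    print(f"{name}: T50% ≈ {t50(times, rel):.0f} min")
print("RDR_60:", rdr(profiles, "ketoprofen powder", list(times).index(60)))
```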
Furthermore, L-arginine and L-lysine appear superior to tris in terms of their capacity to improve the in vitro dissolution rates of ketoprofen. L-arginine (pKa = 12.48) is the strongest base compared to the other two basic excipients, L-lysine (pKa = 10.79) and tris (pKa = 7.8). The stronger the base, the faster the in vitro dissolution rate can be recorded. This is due to the faster alkalinization of the diffusion layer surrounding the drug particles, as well as the increasing ionization of the acidic drug in this diffusion layer [20]. Additionally, these results correlate well with the solubility studies that demonstrated the following order: L-arginine > L-lysine > tris.
Molecular Docking
Several methods could be utilized to establish the ability of the three basic excipients to form a salt with ketoprofen. Fundamentally, the difference in acid dissociation constants, pKa(base) − pKa(acid), for ketoprofen and the three basic excipients is widely used to predict whether cosolvation experiments will produce a cocrystal or a salt. A pKa difference greater than 3 suggests salt formation, while values less than 0 suggest that a cocrystal is the predominant form [24][25][26][27]. An acidic pKa of 4.39-4.45 [28] for the propionic acid proton of ketoprofen gives differences of 8, 6.4, and 3.4 with arginine, lysine, and tromethamine, respectively (Figure 1). This supports the previous findings of salt formation between ketoprofen and the three bases.
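The ΔpKa rule quoted above can be written down directly; the sketch below uses the pKa values given in the text and flags the expected outcome (salt versus cocrystal) for each base.

```python
# Sketch of the ΔpKa rule used above: ΔpKa = pKa(base) - pKa(acid);
# values > 3 favour a salt, values < 0 favour a cocrystal, and the range
# in between is ambiguous. pKa values follow the text.

KETOPROFEN_PKA = 4.4  # propionic acid proton (reported 4.39-4.45)
BASES = {"L-arginine": 12.48, "L-lysine": 10.79, "tromethamine (tris)": 7.8}

def classify(delta_pka):
    if delta_pka > 3:
        return "salt expected"
    if delta_pka < 0:
        return "cocrystal expected"
    return "salt/cocrystal continuum (ambiguous)"

for base, pka in BASES.items():
    delta = pka - KETOPROFEN_PKA
    print(f"{base}: ΔpKa = {delta:.1f} -> {classify(delta)}")
```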
Additionally, Hirshfeld surface analysis, a tool for visualizing crystal structure interactions (Spackman & Jayatilaka, 2009), of ketoprofen crystals revealed that the carboxylic oxygens are the most likely sites for interaction in ketoprofen (shown as red areas in Figure 6A). The same conclusion was reached with theoretical docking of the three bases on the surface of ketoprofen using the MOE software, suggesting the formation of small, stable complexes, as shown in Figure 6B. In the case of arginine, the complexes created showed the proximity of the basic guanidine NH2 group, which has the highest pKa, to the carboxylic group. The stability of such complexes, and this observed proximity, may favor proton transfer between the ketoprofen acidic group and the guanidine amino group of arginine in a salt formation process. Recently, the salt formation between ketoprofen and tromethamine was confirmed [14]. A salt formation between ketoprofen and lysine was also described, substantiating our assumptions [13].
Writhing Assay
A writhing assay was employed to assess the onset of the analgesic activity of the drug alone and of the coprecipitated mixtures of ketoprofen with the three basic excipients (tris, L-lysine, and L-arginine) within 20 min. Both the number of writhes and the percentage of writhing inhibition were recorded for the untreated, ketoprofen-, K:tris Coppt-, K:L-lysine Coppt-, and K:L-arginine Coppt-treated groups, as illustrated in Figure 7A,B. The number of writhes for the ketoprofen-treated group decreased from 46 to 32, corresponding to 30% inhibition. In contrast, the numbers of writhes recorded for the K:tris, K:L-lysine, and K:L-arginine coprecipitated mixture groups were 3, 8, and 10, respectively, with percentage inhibitions of 91%, 82%, and 78%. These findings indicate that the basic excipients markedly accelerate the onset of analgesic activity compared with the drug alone, consistent with their improved solubility and in vitro dissolution rates. Notably, the in vivo ranking did not fully mirror the in vitro dissolution results, in which L-arginine showed the greatest enhancement of solubility and dissolution rate.
K:tris Coppt produced a statistically significant reduction in the number of writhes compared with both K:L-lysine and K:L-arginine Coppt, although the latter two still showed marked reductions in writhing counts and percentage inhibition relative to ketoprofen alone. No statistically significant difference (p > 0.05) was identified between L-arginine and L-lysine in reducing the number of writhes.
In another study, the ketoprofen lysine salt demonstrated a more rapid and complete absorption than the acid form of ketoprofen. Peak plasma concentration for the ketoprofen lysine salt was attained in 15 min, compared to 60 min for the acid form [29]. Additionally, it was reported that the ketoprofen lysine salt demonstrated analgesic activity two times stronger than ketoprofen, as well as a higher LD 50 [30].
The writhing assay was also used to assess the onset of analgesic activity of nimesulide, a poorly soluble drug. Nimesulide alone inhibited writhing by approximately 22%. In comparison, the more water-soluble form of the drug prepared in an inclusion complex with β-cyclodextrin in a ratio of 1:4 showed a percentage inhibition of 54.5% at 20 min [31]. The nimesulide-tris complex showed a superior reduction in the number of writhes compared to the nimesulide-polyvinylpyrrolidone (PVP) K30 and nimesulide-polyethylene glycol 4000 complexes [15]. Several reports have indicated that, in addition to improving the solubility of poorly soluble drugs, tris can act as a permeability enhancer and alter membrane permeability [32][33][34].
Indomethacin-Induced Ulcer
NSAIDs cause gastric toxicity, including gastric ulcers. Indomethacin, a commonly used NSAID, is often employed as a model drug for inducing gastric ulcers in rats because of its high ulcerogenic index [16,35]. Indomethacin is a potent inhibitor of prostaglandin synthesis and can cause significant damage to the gastric mucosa [36]. This study aimed to determine whether coprecipitated mixtures of ketoprofen and the three basic excipients, which improved solubility and bioavailability, could also reduce the gastrointestinal side effects of ketoprofen. Figure 8 shows stomachs pinned on corkboards to highlight the location and number of ulcers in the negative control, positive control (indomethacin), ketoprofen, and ketoprofen-coprecipitate groups. The indomethacin-treated group (positive control) had the highest number of ulcers, with nine ulcers recorded. The number of ulcers in the ketoprofen group was reduced to about one-third of that in the indomethacin group, reflecting the stronger ulcerogenic potency of indomethacin [36]. There was no statistically significant (p > 0.05) difference in the number of ulcers between the K:tris and ketoprofen groups. Interestingly, the K:lysine and K:arginine coprecipitated mixtures produced significantly fewer ulcers than ketoprofen alone. These results are consistent with recent reports [5]: ketoprofen lysine salt has been shown to reduce ulcer side effects compared to the acidic form of ketoprofen [37].
This is likely due to the residual amino groups of L-lysine and L-arginine, which act as carbonyl scavengers; they also offer protection against oxidative damage to the gastric mucosa by providing indirect antioxidant effects and increasing the levels of glutathione S-transferase P at the cellular level in the gastric mucosa [5]. Additionally, L-lysine and L-arginine have been reported to both enhance mucosal integrity and have gastroprotective effects through nitric oxide (NO) donation [37,38].
Histopathological Studies
Figures 9a-e and 10a-e display histopathological documentation of gastric tissues for the control, ketoprofen, ketoprofen:tris coprecipitate, ketoprofen:lysine coprecipitate, and ketoprofen:arginine coprecipitate groups at low magnification (×100) and high magnification (×400). The normal control group exhibited intact mucosa (double-headed arrow), healthy surface epithelium (thin black arrow), intact normal gastric glands (white arrows), and normal submucosa (Figure 9a). At higher magnification, the normal control group showed healthy surface epithelium with normal integrity (thin black arrow) and intact normal gastric glands (white arrows) (Figure 10a). In contrast, the ketoprofen-treated group exhibited gastric mucosa (double-headed white arrow) with sporadic superficial degeneration and desquamation of the surface epithelium (red arrows). Additionally, degenerative changes and shrinkage of the gastric glands (thick black arrows) were observed (Figure 9b). Figure 10b shows superficial degeneration and desquamation of the surface epithelium (red dotted arrows). Furthermore, degenerative changes and shrinkage of gastric glands (thick black arrows) were recorded for the ketoprofen-treated group (Figure 10b).
Figure 9c1,c2 shows gastric mucosa (double-headed white arrow) with ulcerated regions (red arrows) and slight degeneration of glands (thick black arrow) in the K:tris-treated group. Ulcerated surface epithelium (red dotted arrows) with slight degeneration and atrophy of gastric glands (thick black arrow) was recorded for the same group at higher magnification in Figure 10c1,c2. Figure 9d shows normal, intact mucosa (double-headed arrow), maintained surface epithelial integrity (black arrows), and gastric glands (white arrows) for the K:lysine group. Maintained surface epithelial integrity (black arrows) and intact gastric glands (white arrows) were recorded at higher magnification (Figure 10d). Figure 9e shows intact, healthy mucosa and possible protection against ketoprofen-induced superficial ulceration (black arrows) for the K:arginine group. In Figure 10e, intact, healthy mucosal surfaces (black arrows) and normal gastric glands (white arrows) were recorded at higher magnification.
These findings correlate well with the ulcer indices shown in Table 3 and, together with the enhanced solubilization of ketoprofen, confirm the gastroprotective effect and safety benefits of the two basic amino acids L-lysine and L-arginine.
Table 3. Ulcerogenic potential (number of ulcers) and ulcer indices for the control, positive control (indomethacin), ketoprofen (K), and the K:lysine, K:arginine, and K:tris coprecipitated mixtures.
Test Substance | Ulcer Number | Ulcer Index
Control | 0 ± 0.0 | 0
Indomethacin | 8.66 ± 0.88 a | 7.8 a
The data are presented as the mean ± SD of six animals. A one-way ANOVA test followed by a Tukey-Kramer post hoc test was used for multiple comparisons. a Denotes a significant difference from the control group (p < 0.05). b Represents a significant difference from the indomethacin group (p < 0.05). c Indicates a significant difference from the ketoprofen group (p < 0.05).
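The footnote above describes a one-way ANOVA followed by a Tukey-Kramer post hoc test. The following is a minimal sketch of that analysis in Python using SciPy and statsmodels; the per-animal ulcer counts and group names are hypothetical placeholders, not the study data.

```python
# Sketch of the statistical treatment described in the table footnote:
# one-way ANOVA followed by a Tukey post hoc comparison.
# The per-animal ulcer counts below are hypothetical.
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

counts = {
    "control":      [0, 0, 0, 0, 0],
    "indomethacin": [8, 9, 10, 8, 9],
    "ketoprofen":   [3, 4, 2, 3, 4],
    "K:lysine":     [1, 0, 1, 2, 1],
}

f_stat, p_value = f_oneway(*counts.values())
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

values = np.concatenate(list(counts.values()))
labels = np.repeat(list(counts.keys()), [len(v) for v in counts.values()])
print(pairwise_tukeyhsd(values, labels, alpha=0.05))
```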
Conclusions
This study highlighted the role of three basic excipients (tris, L-lysine, and L-arginine) as potential solubilizers, as well as their capacity to form salts with the non-steroidal anti-inflammatory drug ketoprofen. All three basic excipients potentiated and accelerated the analgesic activity of ketoprofen, owing to their penetration-enhancing activities and to the enhanced solubility and dissolution rate of the weakly acidic drug. However, only L-arginine and L-lysine demonstrated gastric protection against ketoprofen-induced ulcers and erosion of the gastric mucosa. This study therefore recommends L-arginine and L-lysine as promising agents for improving the analgesic and safety profiles of classical NSAIDs.
Author Contributions: H.A.A.-T., methodology, formal analysis, data curation, and initial draft preparation; M.E.S., methodology, data curation, review, and editing; T.S.M., methodology, data curation, review, and editing; J.A.A.-A., methodology, formal analysis, and writing; H.A., conceptualization, methodology, data curation, review, and editing. All authors have read and agreed to the published version of the manuscript.
|
v3-fos-license
|
2018-05-23T14:22:00.000Z
|
2017-10-06T00:00:00.000
|
119386083
|
{
"extfieldsofstudy": [
"Physics"
],
"oa_license": "CCBY",
"oa_status": "HYBRID",
"oa_url": "http://link.aps.org/pdf/10.1103/PhysRevD.97.094024",
"pdf_hash": "312c048ab4f31fcd94e38e85698ce5b15ec25141",
"pdf_src": "Arxiv",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46502",
"s2fieldsofstudy": [
"Physics"
],
"sha1": "312c048ab4f31fcd94e38e85698ce5b15ec25141",
"year": 2017
}
|
pes2o/s2orc
|
Threshold Factorization Redux
We reanalyze the factorization theorems for Drell-Yan process and for deep inelastic scattering near threshold, as constructed in the framework of the soft-collinear effective theory (SCET), from a new, consistent perspective. In order to formulate the factorization near threshold in SCET, we should include an additional degree of freedom with small energy, collinear to the beam direction. The corresponding collinear-soft mode is included to describe the parton distribution function (PDF) near threshold. The soft function is modified by subtracting the contribution of the collinear-soft modes in order to avoid double counting on the overlap region. As a result, the proper soft function becomes infrared finite, and all the factorized parts are free of rapidity divergence. Furthermore, the separation of the relevant scales in each factorized part becomes manifest. We apply the same idea to the dihadron production in $e^+ e^-$ annihilation near threshold, and show that the resultant soft function is also free of infrared and rapidity divergences.
I. INTRODUCTION
Factorization theorems in which high-energy processes are divided into hard, collinear, and soft parts are essential in providing precise theoretical predictions. Though it is difficult to probe the threshold region experimentally, it is theoretically both interesting and tantalizing to exploit the factorization near threshold in Drell-Yan (DY) process and in deep inelastic scattering (DIS). A prominent distinction near threshold is that there exists nonvanishing soft interaction since the real contribution does not cancel the virtual contribution completely due to the kinematic constraint near threshold [1,2]. This distinctive feature near threshold was also discussed in Refs. [3][4][5][6] in the framework of the soft-collinear effective theory (SCET) [7][8][9][10].
It is well established in full quantum chromodynamics (QCD) that the structure functions F_DY for DY process and F_1 for DIS can be schematically written in a factorized form as [1,2]

F_DY ∼ H_DY(Q) f_{q/N1} ⊗ f_{q̄/N2} ⊗ S_DY(Q(1−z)),  (1)
F_1 ∼ H_DIS(Q) f_{q/N} ⊗ J(Q√(1−x)) ⊗ S_DIS(Q(1−x)),  (2)

where the kinematical variables τ = q²/s = Q²/s, z = τ/(x_1 x_2), and x = −q²/(2P·q) = Q²/(2P·q) are all close to 1. Here q^µ is the hard momentum carried by the photon, s is the invariant mass squared of the incoming hadrons in DY, and P^µ is the momentum of the incoming hadron. The hard functions H_DY(Q) and H_DIS(Q) describe the physics at the large scale Q. The PDF f_{i/N}(x, µ) is the collinear part from the incoming hadrons, which represents the probability that a parton of type i in a hadron N carries a longitudinal momentum fraction x. The jet function J(Q√(1−x), µ) describes the energetic collinear particles in the final state in DIS. Finally, the soft functions S_DY(Q(1−z), µ) and S_DIS(Q(1−x), µ) describe the emission of soft particles. (Actually S_DIS(Q(1−x), µ) = 1 to all orders in α_s, as will be discussed later.) Here '⊗' denotes an appropriate convolution.
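To make the threshold limit concrete, the sketch below evaluates the Drell-Yan variables τ and z for illustrative values of Q, √s, and the momentum fractions x_{1,2}; the numbers are hypothetical and serve only to show that τ and z approach 1 when the partons carry most of the hadron momenta.

```python
# Sketch: threshold kinematics for Drell-Yan. tau = Q^2/s and z = tau/(x1*x2);
# near threshold the incoming partons carry almost all of the hadron momenta,
# so tau and z (and x in DIS) approach 1. Numbers are illustrative only.

def drell_yan_variables(Q, sqrt_s, x1, x2):
    tau = Q**2 / sqrt_s**2
    z = tau / (x1 * x2)
    return tau, z

Q, sqrt_s = 950.0, 1000.0   # GeV, hypothetical lepton-pair mass and collider energy
for x1, x2 in [(0.97, 0.95), (0.99, 0.97)]:
    tau, z = drell_yan_variables(Q, sqrt_s, x1, x2)
    print(f"x1={x1}, x2={x2}: tau={tau:.3f}, z={z:.3f}, 1-z={1-z:.3f}")
```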
If we consider the factorization in SCET, the surmised form of the factorized structure functions in full QCD near threshold in Eqs. (1) and (2) looks reasonable at first glance, but there are delicate and discomfiting aspects. In full QCD, the PDF contains collinear divergences, which yield the DGLAP equation through suitable definitions of the PDF or with other techniques [1]. And the soft part, built from eikonal cross sections, is IR finite. In SCET, if we naively separate the collinear and soft modes, IR and rapidity divergences appear in each factorized part. In this paper, we address this problem by considering additional modes required in SCET near threshold.
The issues on the factorization near threshold in SCET are summarized as follows: First, since the incoming active partons take almost all the hadron momenta, the emission of additional collinear particles is prohibited. It means that only the virtual correction contributes to the collinear part [11,12]. Therefore, if we consider the collinear interaction alone, the PDF near threshold should be definitely different from the PDF away from threshold, where the latter includes the effect of real gluon emissions.
Second, even though we accept Eqs. (1) and (2) and compute the factorized parts in SCET perturbatively, we encounter infrared (IR) divergences not only in the PDFs but also in the soft functions. The IR divergence in the PDF can be safely absorbed in the nonperturbative part, but the IR divergence in the soft function is a serious problem since it destroys the factorization and prevents a legitimate resummation of large logarithms of 1 − x or 1 − z.
The existence of the IR divergence has been casually disregarded in the belief that the final physical result should be free of it. But it was pointed out in Refs. [13,14], by carefully separating the IR and ultraviolet (UV) divergences, that the soft functions do contain an IR divergence. In Refs. [13,14] we suggested how some of the divergences can be transferred from the soft part to the collinear part to make the soft function IR finite, while the collinear part reproduces the PDF near threshold. Basically this amounts to reshuffling divergences based on the physics near threshold, but it was difficult to explain how the scale dependence in each part can be established consistently to resum large logarithms. For example, the typical scale for the PDF is µ ∼ Λ_QCD or larger, while the relevant scale for the soft function is Q(1 − z).

Third, as we will see later, the soft parts in Eqs. (1) and (2) include the rapidity divergence. The rapidity divergence arises when the product of the lightcone momenta remains constant while each component goes to zero or infinity [15,16]. Though the rapidity divergence exists in each factorized part, the scattering cross section, which is a convolution of the factorized parts, is free of the rapidity divergence. However, the cancellation of the rapidity divergence occurs only when the invariant masses of the different modes are of the same order. Near threshold, the invariant masses of the collinear particles and the soft particles are different, and there is no reason for the rapidity divergence to cancel in the sum of the collinear and the soft parts.
The naive extension of the factorized form in Eqs. (1) and (2) to SCET contains all these problems. And the questions are how we can obtain a consistent factorization formula near threshold, and how it can be connected to the factorization away from threshold. The predicament can be resolved by noticing that, near threshold, there is an additional degree of freedom with small energy, collinear to the beam direction. The small energy scale is given by ω = Q(1 − z) in DY process or ω = Q(1 − x) in DIS with z ∼ x ∼ 1, and lies between the large scale Q and the low scale Λ QCD . The main points of our paper are to identify the new degrees of freedom, to incorporate them in the definition of the PDF and the soft functions, to calculate the perturbative corrections at order α s , and to show that we obtain the proper factorization near threshold.
The high-energy processes including the threshold region can be efficiently described by SCET. In SCET, the n-collinear momentum scales as (n̄·p, n·p, p_⊥) ∼ Q(1, λ², λ), where Q is the large energy scale and λ is the small parameter for power counting in SCET. The n̄-collinear momentum scales as Q(λ², 1, λ). Here n^µ and n̄^µ denote the lightcone vectors satisfying n² = 0, n̄² = 0 and n·n̄ = 2. In order to describe the threshold region, from the n- and n̄-collinear interactions we decouple the n-collinear-soft (csoft) and the n̄-csoft modes respectively, which scale as p^µ_{n,cs} ∼ ω(1, α², α), p^µ_{n̄,cs} ∼ ω(α², 1, α).
Here the power-counting parameter α for the csoft modes satisfies the relation ωα ∼ Qλ, such that the collinear and csoft particles have p 2 ∼ ω 2 α 2 ∼ Q 2 λ 2 . The additional partition of the collinear modes depends on the new scale introduced near threshold. The csoft modes are soft since the overall scale is governed by the small scale ω, but the momentum components scale like collinear momenta. The nomenclature for the collinear-soft modes varies, as in the collinear-soft (csoft) modes [17,18], the coft modes [19], and the soft-collinear modes [20], referring to the modes with similar momentum scaling in different situations. Here we will simply call these modes the csoft modes. The important feature near threshold is that the incoming active parton cannot emit real collinear particles, but the particles in the csoft modes can be emitted. And we define the PDF near threshold including the csoft modes. The new definition of the PDF covers the threshold region as well as the region away from threshold since the effect of the csoft modes away from threshold is cancelled to all orders, while it correctly describes the PDF near threshold. To avoid double counting on the overlap region, the contribution of the csoft modes should be subtracted from the soft part to obtain the soft function. Here the soft modes near threshold scale as p s ∼ ω(1, 1, 1), hence the csoft mode can be also considered to be a subset of the soft mode.
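The scalings quoted above can be tabulated numerically; the sketch below assigns illustrative values to Q, λ, and ζ, fixes α through ωα ∼ Qλ, and prints the resulting light-cone components and virtualities, confirming that the collinear and csoft modes share p² ∼ Q²λ² while the soft mode sits at ω². All numerical values are hypothetical.

```python
# Sketch: light-cone scaling of the modes discussed above, written as
# (nbar.p, n.p, p_perp) ~ (overall scale) * (a, b, c). The virtuality is
# p^2 ~ (nbar.p)(n.p). Numerical values are illustrative only.

def mode(scale, a, b, c):
    p_plus, p_minus, p_perp = scale * a, scale * b, scale * c
    return p_plus, p_minus, p_perp, p_plus * p_minus   # last entry ~ p^2

Q = 100.0                 # GeV, hard scale (illustrative)
lam = 0.02                # collinear power-counting parameter, lambda ~ Lambda_QCD/Q
zeta = 0.05               # 1 - z near threshold
omega = Q * zeta          # small energy scale of the csoft/soft modes
alpha = Q * lam / omega   # csoft power counting, chosen so that omega*alpha ~ Q*lambda

modes = {
    "n-collinear": mode(Q, 1.0, lam**2, lam),
    "n-csoft":     mode(omega, 1.0, alpha**2, alpha),
    "soft":        mode(omega, 1.0, 1.0, 1.0),
}
for name, (pp, pm, pt, p2) in modes.items():
    print(f"{name:12s} p+={pp:9.3g}  p-={pm:9.3g}  p_perp={pt:9.3g}  p^2~{p2:9.3g}")
```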
The effect of the csoft modes in the collinear and soft modes is more interesting when we consider the details of the higher order calculations. Without the csoft modes, both the soft part and the PDF contain the IR and rapidity divergences. But when the csoft modes are subtracted from the soft part, the resultant soft functions are free of the IR and rapidity divergences, and the PDF is free of rapidity divergence when the contribution of the csoft modes is included.
The structure of the paper is as follows: In Sec. II, the main idea of incorporating the csoft modes is presented in SCET. The PDF and the soft functions are defined near threshold in DY and in DIS processes. In Sec. III, the soft functions and the PDF are computed at order α s with the csoft modes. In Sec. IV, we consider the factorization of the dihadron production in e + e − annihilation near threshold, in which the effect of the csoft modes is included in the fragmentation functions. Finally we conclude in Sec. V.
A. Extension of the PDF to threshold
The main issue in constructing the PDF near threshold is how to implement the tight kinematic constraint, and how to relate it to the PDF away from threshold. Near threshold, the incoming partons cannot emit real collinear particles, hence only the virtual corrections contribute. On the other hand, the emission of the csoft modes is allowed. Therefore we start from defining the PDF near threshold by subdividing the collinear field into the collinear and the csoft modes. In Ref. [21], the decomposition of the collinear and the csoft modes has been performed in order to describe a fragmenting process to a jet with a large momentum fraction z. It can be adopted in defining the PDF near threshold.
We first decouple the soft mode ∼ Qζ(1, 1, 1) near threshold from the collinear mode, where ζ is a small parameter characterizing 1 − z or 1 − x. This is obtained by redefining the collinear fields in terms of the soft Wilson line Y_n [9], ξ_n → Y_n ξ_n and A_n^µ → Y_n A_n^µ Y_n^† (Eq. (4)), where the soft Wilson line Y_n is given in Eq. (5). Then we extract the csoft mode ∼ Qζ(1, α², α) in the collinear sector. That is, the collinear gluon A_n^µ is further decomposed into A_n^µ → A_n^µ + A_{n,cs}^µ. The resultant collinear mode scales with the large energy Q, while the csoft mode scales with Qζ. After the decomposition, the covariant derivative can be written as iD^µ = iD_c^µ + iD_cs^µ = P^µ + gA_n^µ + i∂^µ + gA_cs^µ. Here P^µ (i∂^µ) is the operator extracting the collinear (csoft) momentum, which applies only to collinear (csoft) operators.
The collinear quark distribution function, which is the PDF away from threshold, is defined in Eq. (6), where P_+ ≡ n·P is the operator extracting the largest momentum component from the collinear field. The average over spin and color is included in the matrix element. The combination χ_n = W_n^† ξ_n is the collinear gauge-invariant building block, with the collinear Wilson line W_n and the collinear quark field ξ_n.
For the proper treatment of the PDF near threshold, we need to include the csoft mode, which describes the emission along the beam direction. It is implemented by replacing n·iD c with n · iD = n · iD c + n · iD cs . Then we define the PDF as which covers all the regions, near and away from threshold. Note that the expression for φ q/N is invariant under the collinear and csoft gauge transformations. In order to show the gauge invariance order by order in power counting in a manifest way [22], we redefine the collinear gluon as where the collinear Wilson lineŴ n is expressed in terms of the newly defined collinear gluon field µ . Then the covariant derivative can be written as where the hats in W n and A n are removed for simplicity. Then the delta function in Eq. (7) is written as We can decouple the csoft interaction from the collinear part by redefining the collinear field as ξ n → Y n,cs ξ n , A µ n → Y n,cs A µ n Y † n,cs , which is similar to the decoupling of the soft interaction in Eq. (4). Here the csoft Wilson line Y n,cs is defined as and Yn ,cs is obtained by switching n ↔ n. Using the relation n · iD cs = Yn ,cs n · i∂Y † n,cs , the final expression for the PDF is given by Note that this new definition of the PDF is also valid in the region away from threshold. In this case the term i∂ + in the delta function is much smaller than xP + − P + , and it can be safely neglected. Then, due to the unitarity of the csoft Wilson line, the effect of the csoft modes cancels to all orders. Hence we can recover Eq. (6) and describe the PDF away from threshold with the collinear interactions only. Near threshold, we can put the label momentum in Eq. (13) as P + = P + , and obtain the PDF as The same result near threshold has been also derived in SCET + approach [23]. Therefore the expression in Eq. (13) can be regarded as the definition of the PDF over all kinematic regions. Using the expression in Eq. (14), we can calculate the PDF near threshold at the parton level, i.e., φ q/q . As we will see later, the calculation exactly reproduces the standard PDF in the limit x → 1 and it satisfies the Dokshitzer-Gribov-Lipatov-Altarelli-Parisi (DGLAP) evolution. From Eq. (14), the fluctuation of the csoft mode is estimated as p 2 cs ∼ Λ 2 QCD . Therefore this mode scales as p µ cs ∼ Qζ(1, α 2 , α) with α = Λ QCD /(Qζ). When we discuss the decomposition of the collinear and the csoft modes below Eq. (5), the csoft mode can be regarded as a subset of the collinear mode. Hence the collinear mode would scale as Q(1, λ 2 , λ ) with λ ∼ ζ 1/2 α. And the offshellness is given by p 2 c ∼ Λ 2 QCD /ζ, which is much larger than the typical hadronic scale squared. However, near threshold this collinear mode contributes to the PDF only through the virtual corrections without any specific scale. So there is no impact on integrating out the mode ∼ Q(1, λ 2 , λ ) and we can scale it down to p c ∼ Q(1, λ 2 , λ) with λ ∼ Λ QCD /Q. Then the collinear mode at the lower scale has the offshellness p 2 c ∼ Λ 2 QCD .
B. Prescription of the factorization near threshold
With the new definition of the PDF, the naive factorization formulae, which are schematically given in Eqs. (1) and (2), can be cast into an appropriate form near threshold. In order to construct correct factorization theorems near threshold, we take the following steps:
• After integrating out the hard interactions, we construct the naive factorization formalism by decomposing the collinear and soft interactions.
• Next we decouple the csoft mode from the collinear mode, and express the PDF in terms of the collinear and the csoft fields.
• We define the soft functions by subtracting the csoft contributions to avoid double counting.
The derivation of the naive factorization for the structure functions in SCET is not repeated here. Instead we refer to Ref. [14], where the details of the derivation are presented and the naively factorized results are shown in Eqs. (2.13) and (2.33) for DY and DIS respectively.
The naive factorization for the structure function in DY process is written as where N c is the number of colors, J µ is the electromagnetic current, and Q 2 = q 2 is the invariant mass squared of the lepton pair. τ = Q 2 /s and z = Q 2 /ŝ, whereŝ is the centerof-mass energy squared for the incoming partons. The cross section is given by is the Born cross section for the quark flavor f with the electric charge Q f . The naive soft functionS DY is defined as We call this naive soft function since the subtraction of the csoft modes is not included yet. The naive factorization for the structure function F 1 in the Breit frame is given by where P µ = P + n µ /2 is the momentum of the hadron N along the beam and the final-state jet function in the n direction is defined as [9] Xn χn|Xn Xn|χn = / n 2 Here the naive soft functionS DIS is given as Based on the naive factorization in Eq. (15) and (17), we construct the proper factorization theorem as follows: As shown in section. II A, we decompose the collinear and the csoft modes in the collinear sector and express the PDF in Eq. (14). This procedure can be achieved by replacing the collinear PDF f i=q,q/N with the appropriate PDF φ i/N . Then we note that the csoft modes scaling as p n,cs ∼ Qζ(1, α 2 , α) and pn ,cs ∼ Qζ(α 2 , 1, α) are also the subsets of the soft mode ∼ Qζ(1, 1, 1), Therefore, in order to avoid double counting for the overlap region, the csoft contribution should be subtracted from the naive soft function.
Finally the correct factorization theorem for DY and DIS processes are given as Note that there is no correlation between φ q/N 1 and φq /N 2 since the n-collinear(-csoft) and ncollinear(-csoft) fields do not interact with each other at leading order in the power counting of the collinear(-csoft) limit. 2 Here S DY is the soft function for DY after subtracting the csoft contribution from the naive soft function. The subtracting procedure follows the basic idea of the zero-bin subtraction [28], which can be legitimately applied to the csoft sector [29]. As a result the soft function contains neither IR nor rapidity divergence contrary to the naive soft function. In sec. III we will see the details of the computation for the soft function at order α s . When we compare Eq. (21) with Eq. (17), we see thatS DIS is not present in the final factorization theorem since the csoft contribution cancelsS DIS to all orders in α s . At tree level, both S DY in Eq. (20) and the naive soft function are normalized to δ(1 − z). At order α s , S DY is given as S where the superscript denotes the order in α s . And S cs and S cs are the n-csoft and n-csoft contributions respectively, which are given as Here Y n,cs and Y n,cs are the csoft Wilson lines in terms of the n-csoft and n-csoft gluons respectively. Note that 2i∂ 0 in the argument of the naive soft function in Eq. (16) is replaced by n · i∂ (n · i∂) in S cs (S cs ) according to the power counting. In order to specify the soft region completely in Eq. (22), we may introduce and add the contribution of the so-called 'soft-soft (ssoft)' mode scaling as p ss ∼ Qζ(α 2 , α 2 , α 2 ). But the ssoft mode does not contribute to Eq. (22). In general we can divide the full soft region into the 'hard-soft (hsoft)', the csoft, and the ssoft regions. This division and the partition of the full soft region are similar to the procedure in constructing the hard, collinear, and soft modes in SCET from QCD. The only difference is that the large energy Q in the full theory is replaced with Qζ here. Therefore, we can systematically factorize the full soft modes into the hsoft, csoft and ssoft degrees of freedom. In this respect, S DY in Eq. (22) can be regarded as the one-loop correction to the hsoft function obtained from the matching between the full soft and the csoft contributions. Since the csoft modes can reproduce the low energy behavior of the full soft function as the collinear modes do from the full theory, we argue that the hsoft function remains IR finite at higher orders.
For the soft function in DIS, the nonzero csoft contribution comes only from S_cs in Eq. (23). It is the same as the naive soft function, as well as the csoft contribution to the PDF in Eq. (14). At order α_s, and indeed to all orders, the csoft contribution therefore cancels the naive soft function in the subtraction, leaving a trivial DIS soft function.
III. THE SOFT FUNCTIONS AND THE PDF NEAR THRESHOLD
We compute the soft functions and the PDF explicitly at one loop in order to verify the statements in the previous section. We regulate the UV divergence using dimensional regularization with D = 4 − 2ε and the MS scheme. We introduce a fictitious gluon mass m_g to regulate the IR divergence. We also consider the rapidity divergence [15,16], which appears as the loop momentum k_+ or k_− goes to infinity while k_+k_− remains finite. We employ the Wilson line W_n supplemented with the rapidity regulator of Ref. [16], so that the rapidity divergence appears as poles in η. The Wilson lines in the n̄ direction can be obtained by switching n and n̄. For the csoft mode, we use the form of the soft Wilson line, but the soft field is replaced by the csoft field.
A. The soft functions near threshold
We first consider the soft function for DY process near threshold. The naive soft function is defined in Eq. (16), and the correct soft function can be obtained through the csoft subtractions, given by Eq. (22). In order to see the scale dependence clearly, we introduce the dimensionful soft function S_DY(ω), where ω = Q(1 − z). The soft virtual contribution at one loop, given in Eq. (27), is proportional to (α_s C_F/π) δ(ω), where k_+ = n·k and k_− = n̄·k denote the lightcone components of the gluon loop momentum. The real gluon emission at order α_s is given in Eq. (28). Here Θ is the step function, and we put ε = η = 0 since the integral has neither the UV nor the rapidity divergence. The final result is expressed in terms of the Λ-distribution, defined in Eq. (29), where f is a smooth function at ω = 0. In defining the Λ-distribution, Λ is an arbitrary scale larger than ω, but it suffices that Λ is slightly larger than ω.
In obtaining the final result in Eq. (28) with the Λ-distribution, we write M^R_{S,DY} as in Eq. (30). Note that this expression is independent of Λ. But the scale Λ is chosen such that the integration over the delta function δ(ω − k_+ − k_−) yields a nonzero value. That means Λ can be any value larger than ω, but on physical grounds Λ is slightly larger than ω and of the same order, i.e., Λ ∼ Q(1 − z). Fig. 1 shows the phase space for the real gluon emission, where the shaded green region denotes the integration region for the part proportional to δ(ω) in Eq. (28) or (30). The dashed line represents the constraint for nonzero ω. Combining Eqs. (27) and (28), we obtain the naive DY soft function at order α_s, and we can clearly see that the naive one-loop result contains the IR divergence as a logarithm of m_g. This is due to the incomplete cancellation of the virtual and real corrections. If the phase space for the real gluon emission spanned the whole region with no constraint, the virtual and the real corrections would cancel, M^R_S + M^V_S = 0. In this case, the soft function would vanish at order α_s, and this would hold true to all orders due to the fact that Y_n^† Y_n = Y_n̄^† Y_n̄ = 1. However, as can be seen in Fig. 1, the phase space for the real gluon emission does not cover all of the IR region (near the red line) available to the virtual corrections near threshold. Therefore the incomplete cancellation yields an IR divergence in the naive DY soft function. The existence of the IR divergence, which was pointed out in Refs. [13,14], could invalidate the factorization near threshold. Furthermore, as shown in Fig. 1, there is no rapidity divergence in the real gluon emission, since the phase space responsible for the rapidity divergence is not included in the phase space for the real emission. The rapidity divergence in the naive soft function comes solely from the virtual contribution.
These problems posed by the naive soft contribution can be resolved by introducing the csoft modes. The csoft contribution is included in the definition of the PDF, but the csoft momentum is also a subset of the soft momentum. In order to avoid double counting, the csoft contribution is subtracted from the naive soft function to define the true soft function near threshold, given by Eq. (22). The subtraction removes both the IR and the rapidity divergences in the soft function.
Let us first consider the contribution of the n-csoft mode at order α_s. In DY process, the contribution of the n̄-csoft mode is the same as the n-csoft case due to the symmetry under n ↔ n̄. We calculate the dimensionful csoft function S_cs(ω) = S_cs/Q at order α_s. Here ω = Q(1 − z) and the dimensionless csoft function S_cs is defined in Eq. (23). The virtual contribution of the csoft mode is the same as that of the soft mode, M^V_cs = M^V_S, which is presented in Eq. (27).
The real gluon emission at order α_s is written by invoking the Λ-distribution, in which the IR divergence as ω → 0 is extracted in the term proportional to δ(ω). In Fig. 2 we show the structure of the phase space for the real emission both in the k_+–k_− and in the k_+–k_L² planes, where k_L² ≡ k_+k_−. The shaded green region is the integration region for the part proportional to δ(ω), and the dashed line is the phase space for M^R_cs(ω ≠ 0). When the real and the virtual contributions are combined, the contribution of the part proportional to δ(ω) comes from the region B in the k_+–k_L² plane of Fig. 2, since the virtual contribution covers the whole region above the line k_+k_− = m_g² with the minus sign relative to the real contribution. Therefore the csoft contribution at order α_s can be written in terms of I_B, the integral over the region B in Fig. 2. Here the original rapidity regulator |k_+ − k_−|^(−η) is replaced with k_+^(−η). In the region B of Fig. 2, k_− in the regulator can be safely ignored, since the rapidity divergence occurs only when k_+ goes to infinity and k_− goes to zero while k_L² = k_+k_− remains finite. Actually, keeping k_− in the regulator has no effect on the calculation of the region B in the limit η → 0, so there is no difference between the original regulator and k_+^(−η) as far as we integrate over the region B. In the k_+–k_L² plane, I_B includes the UV (IR) divergence as k_L² goes to infinity (m_g²).

FIG. 2. The structure of the phase space for the real n-csoft gluon emission. The green region is the integration region for the part proportional to δ(ω) with the Λ-distribution. The dotted line k_+ = ω is the constraint from the delta function for nonzero ω. In the k_+–k_L² plane, the integration region looks simple, and the regions where the UV and the rapidity divergences arise are shown.

The computation of M^R_cs(ω ≠ 0) is straightforward. Finally, the bare csoft contribution at order α_s is given in Eq. (35). The n̄-csoft contribution S̄_cs is the same as S_cs due to the symmetry under n ↔ n̄. We finally obtain the proper soft function for DY process at order α_s, given in Eq. (36). Note that this soft function contains only the UV divergence, with neither rapidity nor IR divergences. It is governed by a single scale ω ∼ Λ ∼ Q(1 − z), hence the scale µ that minimizes the large logarithms is Q(1 − z).
The dimensionful soft function in Eq. (36) can be easily converted to the dimensionless soft function. From the definition of the Λ-distribution in Eq. (29), we have the following relations to the standard plus distribution: Then the renormalized dimensionless soft function to next-to-leading order (NLO) in α s is given as which is the same as W DY in Ref. [13,14]. We now consider the soft function in DIS. The naive soft function has been defined in Eq. (19). Note that the naive soft function is the same as the n-csoft function in Eq. (23) except that the Wilson lines involved are the soft fields for the naive soft function, and the csoft fields for the csoft function. The soft and n-csoft momenta scale as p µ s ∼ Qζ(1, 1, 1) and p µ cs ∼ Qζ(1, α 2 , α). However, since both functions involve the same scale Q(1 − z) ∼ Qζ in the delta functions, they are the same to all orders in α s . 3 Unlike DY process, there is no n-csoft contribution in DIS. When we consider n-csoft contribution from Eq. (19), the delta function becomes δ(1 − z) since Q(1 − z) is much larger than n · p cs in power counting. The n-csoft real emission is the same as the virtual n-csoft contribution but with the opposite sign. Therefore the n-csoft contribution cancels at one loop, and to all orders in α s . As a result, the proper soft function in DIS remains as δ(1 − z) to all orders in α s .
B. The PDF at NLO near threshold
From the definition of the PDF near threshold in Eq. (14), we compute the correction at order α s . If we take the perturbative limit, the PDF at the parton level can be additionally factorized as [5,11,23,30] where Q = p + is the large quark momentum, C q is the collinear part, and S sc is the csoft function defined in Eq. (23). At one loop, the virtual contribution for C q is given as The wave function renormalization and the residue are given by And we obtain the collinear part to NLO as It contains the rapidity, the UV and the IR divergences. From Eqs. (35) and (37), we also obtain the dimensionless csoft function to NLO as Therefore the bare PDF to order α s is given as The PDF near threshold is free of rapidity divergence, and it is the same as the PDF away from threshold when we take the limit x → 1. Obviously, the renormalization-group behavior satisfies the DGLAP evolution in the limit x → 1, and we do not repeat solving the renormalization group equations and refer to Refs. [5,6].
IV. DIHADRON PRODUCTION IN e + e − ANNIHILATION NEAR THRESHOLD
We consider the dihadron production in e + e − annihilation near threshold: e − e + → h 1 + h 2 + X, where X denotes soft particles in the final state. Here the final hadrons h 1 and h 2 in the n and n direction take almost all the energies of the mother partons p + h1 = x 1 Q, where Q is the center-of-mass energy and x 1,2 are close to 1. The scattering cross section is factorized into the two fragmentation functions and the soft function. If we naively compute the soft function without the csoft subtraction, here we also have the IR divergence as well as the rapidity divergence, invalidating the factorization theorem. Therefore we define the fragmentation functions in terms of the collinear and the csoft fields describing the radiations in the directions of the observed hadrons. Then we can properly subtract the csoft interactions from the naive soft function, and as a result the factorization theorem can be written as dσ dp + h1 dp − (45) Here σ 0 is the Born scattering cross section and the threshold region corresponds to x 1,2 → 1. The hard function H DH is given by the same as H DY in Eq. (20). S DH is the soft function for the dihadron production and it can be obtained after the csoft subtraction from the naive soft function.
Following the analysis of the fragmentation function to a jet (FFJ) in the large z limit [21], we define the hadron fragmentation function as where we set the transverse momentum of the hadron p ⊥ h1 as zero. Similarly, the fragmentation function from the antiquark D h 2 /q (z 2 , µ) can be obtained in terms of the n-collinear and n-csoft fields. As in the case of the PDF, this definition is valid near, and away from threshold. Near threshold, putting P † + = p + h 1 we can further simplify Eq. (46) as We can easily check that the result at order α s for Eq. (47) at the parton level is given by the same as the result on the PDF and its renormalization behavior follows the DGLAP evolution in the large z limit. For the soft function, we start with the naive soft function, which is defined as The csoft functions can be obtained by taking the n-and n-csoft limits onS DH , which are subtracted from the naive soft function. As a result the NLO correction to the soft function for dihadron production is given by where the csoft functions S cs and S cs have been defined in Eqs. (23) and (24) respectively. In order to see the scale dependence transparently, we consider the NLO calculation with the dimensionful soft function. It is defined as where ω + = Q(1 − z 1 ) and ω − = Q(1 − z 2 ). Then the naive dimensionful soft function is expressed asS The virtual one-loop contribution toS DH is given by M V S,DH = δ(ω + )δ(ω − )M V S , where the one loop resultM V S is given in Eq. (27). And the real contribution is given as Here we put = η = 0 since the integral has no UV, rapidity divergences.
M R S,DH in Eq. (51) is IR divergent as ω ± → 0. In order to extract the IR divergence, we employ the Λ-distribution defined in Eq. (29), where the upper values for ω ± are set as Λ ± . Then Eq. (51) can be rewritten as Combining the virtual and the real contributions, we obtain the naive soft function at order α s as This result contains the IR and the rapidity divergences as expected. From Eq. (49) the csoft contributions to be subtracted from the naive dimensionful soft function are given by δ(ω − )S (1) cs (ω + ) + δ(ω + )S (1) cs (ω − ). And using Eq. (35) we write the NLO csoft contributions as Finally, subtracting Eq. (54) from Eq. (53) we obtain the NLO result for the bare dimensionful soft function as We clearly see that the problematic IR and rapidity divergences are removed by the csoft subtraction as in DY process. Using Eq. (37), we can convert Eq. (55) to the dimensionless soft function. And the scales for the logarithms are determined by µ ∼ Λ ± . As a result, the renormalized soft function at NLO is given as In the framework of SCET, we have scrutinized the factorization theorems near threshold in DY, DIS processes and in the dihadron production in e + e − annihilation by introducing the csoft modes in SCET. The important point in analyzing these processes near threshold is that there appears a csoft mode governed by the scale ω = Q(1 − z) Q. Near threshold, real collinear particles for the PDF cannot be emitted due to the kinematical constraint, while the csoft modes can. The effect of the csoft modes can be implemented by decoupling the csoft modes from the collinear fields. The resultant PDF consists of the csoft Wilson lines along with the collinear fields. Note that the definitions of the PDF and the fragmentation function in Eqs. (13) and (46) with the csoft modes are valid not only near threshold but also away from threshold. Away from threshold, the effect of the csoft mode is cancelled, and the collinear mode alone describes the whole process.
The naive collinear contribution to the PDF contains the rapidity divergence, but it is cancelled once the contribution of the csoft modes is included, and the resultant PDF coincides with the PDF obtained in full QCD. In Ref. [14] the same conclusion was reached by reshuffling suitable divergences between the collinear and the soft parts, but here the physical origin of the cancellation is made more transparent. The same holds for the fragmentation function: the definition including the csoft modes yields the fragmentation function of full QCD, which can be extended to the threshold region.
The csoft mode is also a subset of the soft mode; hence the contribution of the csoft modes must be subtracted from the soft part to avoid double counting. After this subtraction the soft functions are free of IR and rapidity divergences and can be handled perturbatively. The reason is that the phase-space region of the soft modes responsible for the IR and rapidity divergences coincides with the phase space of the csoft modes, so these divergences cancel when the csoft modes are subtracted from the soft part.
The renormalization-group evolution of the soft functions and the PDF can be obtained, for example, as in Refs. [5,6], so we do not repeat it here. The important point is that the introduction of the csoft modes justifies the use of the renormalization group equation: we have explicitly verified that the higher-order correction to the soft function in the DY process (it is zero in DIS) is indeed IR finite and free of rapidity divergences. Until now it has simply been assumed that the soft functions are IR finite; in fact, the removal of the IR and rapidity divergences is subtle and is accomplished only by including the csoft modes. This paper therefore gives a firm basis to the use of the renormalization group equation in resumming large logarithms near threshold.
In Refs. [11,12] the authors considered the same processes, i.e., DY and DIS near threshold, but their results differ from the ones presented here. The main difference is that the PDFs in Refs. [11,12] are formulated by combining the collinear and the soft modes, and as a result a correlation arises between the two collinear sectors in the DY process. The prescription of Refs. [11,12] might hold only when Qζ ∼ ω is close to Λ_QCD, where the csoft modes become identical to the soft mode. In this paper we treat Qζ ∼ ω as a free small energy scale. As long as Qζ ∼ ω ≫ Λ_QCD, with the help of the csoft modes we can formulate the factorization theorem such that no correlation arises between the two collinear sectors and the soft sector.
The final results presented here are the same as those in the previous literature [13,14], so one may wonder what is learned from this analysis. Despite the identical result, several points of our factorization procedure are illuminating and worth commenting on. First, the new scale Q(1 − z) is introduced, and it does not have to be related to Λ_QCD. Previously this new scale was accommodated within the power counting λ ∼ Λ_QCD/Q.
Second, the introduction of the csoft modes yields a soft function free of IR and rapidity divergences. Without the csoft modes, a careful analysis shows that the real emission in the soft function contains an IR divergence that is not cancelled by the virtual correction, and a rapidity divergence remains as well. With the csoft subtraction we have confirmed that the soft functions are indeed free of IR and rapidity divergences, and the PDF turns out to be the same as the PDF in full QCD.
Third, it becomes apparent which scale governs each factorized part. The naive soft function includes contributions from different scales, but once the csoft modes are separated, each part receives the appropriate scale dependence: for the PDF the scale in the logarithm is of order µ ∼ Λ_QCD or larger, while for the soft function µ ∼ ω = Q(1 − z), or Q(1 − x). From the study of dijet production [19,20] we know that additional degrees of freedom are needed to resum the different scales appearing in different factorized parts, and the same is true near threshold in DY and DIS processes as well as in dihadron production.
In conclusion, we have analyzed the factorization near threshold in SCET by including the csoft modes in defining the PDF, and subtracting the csoft contributions for the soft functions. The newly defined PDF can be properly extended to threshold and the resultant factorization theorem takes care of the problem of the IR and rapidity divergences in the soft function, which enables the resummation of the large logarithms through the renormalization group equation. Only after the inclusion of the csoft modes, the factorized result in SCET, Eqs. (20) and (21), is consistent with the result in full QCD.
|
v3-fos-license
|
2015-09-18T23:22:04.000Z
|
2013-11-01T00:00:00.000
|
13320025
|
{
"extfieldsofstudy": [
"Computer Science",
"Engineering",
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.mdpi.com/1424-8220/13/11/15726/pdf",
"pdf_hash": "522b6c05a505f1f41fa3f04e45f54e74a3d04b4f",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46503",
"s2fieldsofstudy": [
"Engineering"
],
"sha1": "522b6c05a505f1f41fa3f04e45f54e74a3d04b4f",
"year": 2013
}
|
pes2o/s2orc
|
A Doppler Transient Model Based on the Laplace Wavelet and Spectrum Correlation Assessment for Locomotive Bearing Fault Diagnosis
The condition of locomotive bearings, which are essential components in trains, is crucial to train safety. The Doppler effect significantly distorts acoustic signals during high movement speeds, substantially increasing the difficulty of monitoring locomotive bearings online. In this study, a new Doppler transient model based on the acoustic theory and the Laplace wavelet is presented for the identification of fault-related impact intervals embedded in acoustic signals. An envelope spectrum correlation assessment is conducted between the transient model and the real fault signal in the frequency domain to optimize the model parameters. The proposed method can identify the parameters used for simulated transients (periods in simulated transients) from acoustic signals. Thus, localized bearing faults can be detected successfully based on identified parameters, particularly period intervals. The performance of the proposed method is tested on a simulated signal suffering from the Doppler effect. Besides, the proposed method is used to analyze real acoustic signals of locomotive bearings with inner race and outer race faults, respectively. The results confirm that the periods between the transients, which represent locomotive bearing fault characteristics, can be detected successfully.
Introduction
Economic and social development in most countries has increased considerably the requirement for transportation capability. Railway transportation has played an important role in this development due to its strong transportation capability and high speeds. The continuous operation of trains is crucial in ensuring fluid and efficient traffic circulation. However, failure of train components can result in unexpected breakdowns, which can lead to serious traffic accidents. Hence, both the economy and human safety are at risk if trains have faulty components. Locomotive bearings support the entire weight of a train and they rotate at a high speed when the train is running. The health of these bearings is crucial for the continuous and safe operation of the train. Therefore, the development of an effective technique for monitoring locomotive bearings is profoundly significant [1].
A bearing usually consists of an inner race, an outer race, a cage, and a few rollers. Once one of these components suffers from a local defect, approximately periodic impacts will be generated when the defective surface comes into contact with the rollers [2]. These transient interaction components therefore contain important information about the health status of the bearing. Extracting these components is the most important task in bearing fault diagnosis based on signal processing [3].
The wayside acoustic defective bearing detector (ADBD) system [4] was developed in the 1980s to identify bearing defects before the bearings overheat. All the devices in this system are set on the wayside, which makes it more economical and practical than an on-board monitoring system [5]. Through the ADBD system, the health status of locomotive bearings can be assessed in passing vehicles. However, when the sound source moves relative to the microphone, the Doppler effect appears in the recorded signals. The signals obtained by the ADBD system then suffer from high frequency shift, frequency band expansion, and amplitude modulation [6], causing a significant decline in the performance of the system, particularly when the vehicles pass at high speed.
Various methods have been developed for bearing fault diagnosis when no relative movement is observed between the bearing and the data acquisition system. Time-frequency analysis, which can extract information from both the time and the frequency domains, was developed for non-stationary signals. Several representative time-frequency distributions [7,8], such as the Wigner-Ville and the Choi-Williams distributions, have proven their potential in bearing fault signal processing [9]. Wavelet transforms were developed to decompose a temporal raw signal into different scales with varying frequency bandwidths [10,11]. Thus, wavelet transforms can be used to enhance bearing fault-related information for further processing [12,13]. The ensemble empirical mode decomposition (EEMD) is an adaptive decomposition method that can decompose nonlinear and non-stationary signals into a set of intrinsic mode functions (IMFs) according to its own natural oscillatory modes [14] and has been widely applied in diagnosing bearing faults [15,16].
Matching pursuit is an adaptive approach that selects optimal atoms to approximate a signal through iterations, and it is effective for analyzing bearing fault transient signals [17]. Freudinger et al. [18] introduced a correlation filtering approach that uses vector inner products between a time history and a set of Laplace wavelets as a measure of the correlation between the data and a range of modal dynamics characterized by the wavelets. The Laplace wavelet parameters at which the local maxima are obtained are regarded as the closest to the observed modal parameters of the system. Based on these fundamentals, Wang et al. [19,20] proposed a method that incorporates a transient model and parameter identification based on wavelets and correlation filtering to achieve bearing fault feature detection. However, high frequency shifts, frequency band expansions, and amplitude modulations occur in the wayside ADBD system due to the Doppler effect, so the techniques discussed above cannot be applied directly to this problem.
In this paper, a novel technique that combines a Doppler transient model and parameter identification based on the Laplace wavelet and a spectrum correlation assessment is proposed for real locomotive bearing fault detection. The Doppler transient model is constructed by considering the effect of Doppler distortion. Model parameters, including the transient periods, are identified by a correlation assessment between the envelope spectrum of the transient model and the real bearing fault signal. The results obtained through both simulations and real case studies demonstrate the remarkable performance of the technique in identifying locomotive bearing fault types.
The rest of this paper is organized as follows: Section 2 briefly describes the fundamental theory that underlies the Laplace wavelet and the correlation assessment. The proposed method is presented in Section 3, followed by the simulation analysis and the real case study in Section 4. Conclusions are presented in Section 5.
Transient Model Based on the Laplace Wavelet
During defective bearing movement, periodic impacts occur in the acquired signals. These transient components can be matched by using elements in a model dictionary. Five representative transient models are usually used to simulate the transient components caused by bearing faults: the Morlet wavelet, the Harmonic wavelet, the Laplace wavelet, the single-sided Morlet wavelet, and the single-sided Harmonic wavelet. The Laplace wavelet is a single-sided damped exponential function formulated as the impulse response of a single-mode system, and it resembles the waveforms commonly encountered in bearing fault detection tasks [19]. A transient model based on the Laplace wavelet is therefore used for further analysis. The formula for the real part of the Laplace wavelet is given as: where W is the temporal range, f is the discrete frequency, ζ is the discrete damping coefficient, and τ is the discrete delay time. These parameters belong to the subsets F, Z, and T_d as shown below: A periodic multi-transient model based on the Laplace wavelet is constructed to simulate the waveform characteristics by introducing the period parameter T. Figure 1 illustrates the single and periodic Laplace wavelet transient models, respectively.
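Since the explicit formulas are not reproduced above, the following Python sketch illustrates one common parameterisation of the real Laplace wavelet (the impulse response of an underdamped single-mode system) and the periodic multi-transient model obtained by repeating it with period T. The functional form and the numerical values are illustrative assumptions, not the authors' exact expressions.

```python
import numpy as np

def laplace_wavelet(t, f, zeta, tau, W):
    """Real Laplace wavelet: a damped sinusoid mimicking the impulse
    response of a single-mode system (one common parameterisation)."""
    s = t - tau
    psi = np.zeros_like(t, dtype=float)
    inside = (s >= 0) & (s <= W)
    decay = -zeta / np.sqrt(1.0 - zeta**2) * 2.0 * np.pi * f * s[inside]
    psi[inside] = np.exp(decay) * np.sin(2.0 * np.pi * f * s[inside])
    return psi

def periodic_transient_model(t, f, zeta, T, W, n_impacts):
    """Periodic multi-transient model: a train of Laplace wavelets
    separated by the impact interval T."""
    x = np.zeros_like(t, dtype=float)
    for k in range(n_impacts):
        x += laplace_wavelet(t, f, zeta, k * T, W)
    return x

# Example: 1 kHz resonance, damping ratio 0.05, impact interval 16 ms
fs = 50_000
t = np.arange(0, 0.25, 1.0 / fs)
model = periodic_transient_model(t, f=1000.0, zeta=0.05, T=0.016, W=0.01, n_impacts=15)
```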
Correlation Analysis
In mathematics, the inner product serves as a powerful tool for evaluating the similarity of two time series. Suppose that two time series x(n) and y(n) have the same length N. Then the inner product of the two finite-length signals can be represented as [21]: The correlation coefficient, based on the inner product, can be used to assess the degree of correlation between the two time series. Its formula is given by: By the Cauchy-Schwarz inequality, the correlation coefficient is constrained to: The closer the correlation coefficient is to 0, the weaker the linear dependence between the two signals.
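A minimal sketch of the inner-product-based correlation coefficient described above (NumPy is used for illustration; the small constant guarding against division by zero is an added safeguard, not part of the original formula):

```python
import numpy as np

def correlation_coefficient(x, y):
    """Normalised inner product of two equal-length series; by the
    Cauchy-Schwarz inequality the result lies in [-1, 1]."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    return float(np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y) + 1e-12))
```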
Proposed Doppler Transient Model Based on Laplace Wavelet and Spectrum Correlation Assessment
Conventional bearing fault detection methods have been developed for situations with no relative movement between the signal acquisition system and the defective bearing, so the acquired signal is not affected by the Doppler effect. Wayside locomotive bearing signals, however, suffer from high frequency shifts, frequency band expansions, and amplitude modulations due to the Doppler effect, and the fault-related impact intervals are no longer identical. Hence, the conventional detection methods are not applicable to the diagnosis of real locomotive bearing faults. In this study, a Doppler transient model based on the Laplace wavelet and a spectrum correlation assessment is proposed to address the inability of traditional methods to handle Doppler-distorted acoustic signals in real locomotive bearing fault detection. Because the correlation coefficient is evaluated in the frequency domain, the transient model's time delay does not need to be considered, which reduces the computation time required for parameter identification and thus improves the computational efficiency. A flowchart of the proposed scheme is shown in Figure 2.
The proposed method follows the steps of transient model construction, Doppler distortion, parameter identification through the assessment of the envelope spectrum correlation, and bearing fault type identification through the recognized impact periods. Each step is discussed in detail in the following subsections.
Doppler Distortion of the Transient Model Based on the Laplace Wavelet
The Doppler effect was first proposed in 1842 by Austrian physicist Christian Doppler [22]. As shown in Figure 3, S is the distance between the initial position and the position when the sound source passes by the microphone. L is the current displacement. X is the distance between the current position and the position when the sound source passes by the microphone. R is the distance between the source point and the microphone. A time delay exists due to the distance between the sound source and the microphone. When the sound source has a movement speed V s relative to the receiver, the wave frequency changes for the receiver. The observed frequency is higher than the emission frequency during the source's approach, is identical at the instant when the source passes by, and is lower during the source's departure.
The Doppler effect makes traditional techniques unsuitable for processing locomotive bearing signals. To address this problem, the Doppler transient model is constructed for further analysis. The Doppler effect is embedded manually into the conventional transient model so that the constructed model is under the same distortion environment as the real locomotive bearing signal. According to acoustic theory, the following formula and procedures can be proposed: (1) Calculating the emission and reception time instants: The reception time instants where f s is the sampling frequency, N is the data length, and t 0 is the initial time instant. As shown in Figure 3, the relationship between the emission and reception time instants can be represented as: where V sw is the velocity of the sound waves in the medium, t e is the emission time instants, and r is the distance between the microphone and the line corresponding to the direction of the velocity of the sound source. L can be obtained by: (2) Interpolation: The periodic transient model χ(t) is interpolated in Equation (3) by using the emission time instants t e , which were calculated in Step 1 through a cubic spline. Let χ e (t e ) represent the interpolated amplitude vector. (3) Amplitude modulation: The amplitudes of the waveform are modulated during transmission from the moving sound source to the microphone. As introduced by Morse acoustic theory [23], it is assumed that the locomotive bearing moves with subsonic velocity (M = V s /V sw < 0.2), which indicates that the sound source is a monopole point source. Supposing that the medium has no viscosity, the received sound pressure can be expressed as: where q represents the total quality flow rate of the source point, q' is the derivative of q, t denotes the running time, represents the angle between the forward velocity of the sound source and the line from the sound source to the microphone, and M = V s /V sw is the Mach number of the source point's velocity. As shown in Equation (9), the received sound pressure comprises the near-field effect and the inverse relationship between the sound pressure and the distance between the source point and the microphone. When M < 0.2, the near-field effect can be neglected [24]. The received sound pressure is then given by: which can also be written as: is the amplitude modulation function and q'[t−(R/V sw )]/(4πr) is the received sound pressure when the microphone and the source point are both fixed. Therefore, the amplitude of the received waveform can be written as: The Doppler effect is thus embedded in the constructed transient model to ensure that the Doppler transient model experiences the same distortion as the real locomotive bearing signal.
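The three-step distortion procedure (emission/reception time instants, cubic-spline interpolation, amplitude modulation) can be sketched in Python as follows. The geometry and the choice of root for the emission times are one reading of Figure 3 and the equations in this subsection; the sketch is an assumption-laden illustration, not the authors' exact implementation.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def doppler_distort(signal, fs, S, r, Vs, Vsw=340.0, t0=0.0):
    """Embed Doppler distortion into a transient-model signal.

    Assumed geometry (a reading of Fig. 3): the source starts a distance S
    before the point of closest approach and moves at constant speed Vs; the
    microphone sits a perpendicular distance r from the track. The amplitude
    modulation uses the monopole (Morse) factor 1 / (R * (1 - M*cos(theta))**2).
    """
    n = len(signal)
    t_r = t0 + np.arange(n) / fs                    # reception time instants
    # Solve t_r = t_e + R(t_e)/Vsw for the emission instants (quadratic in t_e)
    A = Vsw**2 - Vs**2
    B = Vsw**2 * t_r - S * Vs
    C = Vsw**2 * t_r**2 - r**2 - S**2
    t_e = (B - np.sqrt(B**2 - A * C)) / A           # root with t_e < t_r
    # Cubic-spline interpolation of the model at the emission instants
    spline = CubicSpline(t_r, signal)
    resampled = spline(np.clip(t_e, t_r[0], t_r[-1]))
    # Amplitude modulation caused by the source motion
    X = S - Vs * t_e                                # signed distance to closest approach
    R = np.sqrt(r**2 + X**2)
    M = Vs / Vsw
    cos_theta = X / R
    return resampled / (R * (1.0 - M * cos_theta)**2)
```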
Envelope Spectrum Correlation Assessment
To simulate the characteristics of the received waveform in the fault signal of the locomotive bearing, the parameters of the constructed Doppler transient model must be adjusted to match the actual periodic impacts in the locomotive bearing signal. A suitable criterion must be established to optimize the parameters from the subsets, as shown in Equation (2). A new strategy to assess the envelope spectrum correlation is proposed as a quantitative measure to determine the optimal parameters. This strategy comprises three procedures: (1) The Hilbert transforms of the periodic Doppler transient model and the real locomotive bearing fault signal are obtained [25]: where x A (t) is the real locomotive bearing fault signal, and the envelope signals are obtained by calculating the modulus of the analytic signals: (2) Frequency spectrum analysis is performed by: (3) The degree of correlation of the envelope spectrum is assessed by:
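A compact sketch of the three-step envelope spectrum correlation assessment, using SciPy's Hilbert transform; removing the envelope mean before the FFT is an added choice (to suppress the DC bin), not something stated above.

```python
import numpy as np
from scipy.signal import hilbert

def envelope_spectrum(x):
    """Magnitude spectrum of the Hilbert envelope of a signal."""
    env = np.abs(hilbert(x))            # modulus of the analytic signal = envelope
    env = env - np.mean(env)            # drop the DC component (added assumption)
    return np.abs(np.fft.rfft(env))

def envelope_spectrum_correlation(model, measured):
    """Correlation coefficient between the envelope spectra of the
    Doppler transient model and the measured bearing signal."""
    a, b = envelope_spectrum(model), envelope_spectrum(measured)
    n = min(len(a), len(b))
    a, b = a[:n], b[:n]
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
```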
Parameter Identification and Locomotive Bearing Fault Detection
The number and the diameter of the rolling elements in the locomotive bearings are represented by Z and d, respectively. D m is the pitch diameter, α denotes the contact angle of the bearing, and f n denotes the rotational frequency. The ball pass frequency of the outer race (BPFO) can be obtained by: If the surface of the outer race suffers a defect, then every time the rolling element passes through the crack, periodic impulses will be created with interval t as: Similarly, the ball pass frequency in the inner race (BPFI) is given by: Therefore, the inner race fault characteristic frequency is equivalent to BPFI. The optimal parameters to obtain the local maximal envelope spectrum correlation coefficient are then identified, as discussed in Section 3.2. The identified impact period in the Doppler transient model is the related bearing fault impact interval. The fault type can be determined by referring to the calculated theoretical fault-related impact intervals.
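The characteristic frequencies follow directly from standard bearing kinematics. The helper below evaluates BPFO and BPFI and the corresponding impact intervals; the example numbers are hypothetical and are not the NJ(P)3226X1 parameters from Table 1.

```python
import numpy as np

def bearing_fault_frequencies(Z, d, Dm, fn, alpha_deg=0.0):
    """Ball-pass frequencies of the outer (BPFO) and inner (BPFI) race and
    the corresponding impact intervals (standard kinematic formulas)."""
    ratio = (d / Dm) * np.cos(np.radians(alpha_deg))
    bpfo = 0.5 * Z * fn * (1.0 - ratio)
    bpfi = 0.5 * Z * fn * (1.0 + ratio)
    return {"BPFO": bpfo, "T_outer": 1.0 / bpfo,
            "BPFI": bpfi, "T_inner": 1.0 / bpfi}

# Illustrative (hypothetical) parameters, not those of the test bearing
print(bearing_fault_frequencies(Z=14, d=24.0, Dm=180.0, fn=23.0, alpha_deg=0.0))
```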
Simulation Validation of the Proposed Method
The sampling frequency is 50,000 Hz and the impact interval embedded in the simulated signal is 0.016 s. The number of data points is 12,401. A randomly distributed noise n(t) is added to the simulated signal. The simulated and polluted signals are illustrated in Figure 4a,b, respectively. To simulate the actual Doppler distortion caused by the relative movement between the moving sound source and the receiver, Doppler distortion is added to the simulated signal according to the procedures specified in Section 3.1. The parameters in Figure 3 The proposed detection method is applied to the Doppler-distorted signal. The transient model is first constructed according to Equation (3). Its parameters require optimization from the sets T, F, and Z. The selection of these sets is crucial, as a larger interval range and a smaller parameter subset step will give a more accurate result. However, this will also result in excessive computational time and decrease the efficiency of the method. Hence, a balance between efficiency and accuracy should be guaranteed. The parameter subsets of F and T are uniform, as shown in Equations (1) and (2). The range of F is set at {800:10:1200}, which is drawn from the Fourier spectrum of the distorted signal. The subset of Z is non-uniform to provide higher resolution at lower damping ratio values, so that the efficiency of the method can be retained. Hence, the range of subset Z is selected as {{0.005:0.001: 0.03}{0.04:0.01:0.1}{0.2:0.1:0.9}} which have small steps in the low value range and large steps in the high value range. The impact interval of the transient model is searched from the set T, which is selected as {500/50,000:1/50,000:1,000/50,000}. The grid of the model parameters is constructed according to F and Z for each element from set T. When a group of parameters is determined, the transient model Doppler distortion is performed according to the procedures discussed in Section 3.1 to obtain the Doppler transient model. The envelope spectrum correlation between the Doppler transient model and the simulated Doppler distorted signal is assessed. Figure 6 shows the maximal correlation coefficients for the different elements from set T.
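Putting the pieces together, the parameter identification is an exhaustive search over the sets T, F and Z. The sketch below mirrors that loop with the subsets quoted above; the triple loop is written for clarity rather than speed, and the model-building, distortion and scoring callables stand in for the routines sketched earlier.

```python
import numpy as np

# Candidate parameter sets as used in the simulation study
fs = 50_000
T_set = np.arange(500, 1001) / fs                            # impact periods [s]
F_set = np.arange(800, 1201, 10)                             # frequencies [Hz]
Z_set = np.concatenate([np.arange(0.005, 0.031, 0.001),
                        np.arange(0.04, 0.11, 0.01),
                        np.arange(0.2, 1.0, 0.1)])           # damping ratios

def identify_impact_period(measured, t, build_model, distort, score):
    """Exhaustive search: for each (T, f, zeta), build the transient model,
    Doppler-distort it, and keep the best envelope-spectrum correlation."""
    best = {"rho": -np.inf}
    for T in T_set:
        for f in F_set:
            for zeta in Z_set:
                candidate = distort(build_model(t, f, zeta, T))
                rho = score(candidate, measured)
                if rho > best["rho"]:
                    best = {"rho": rho, "T": T, "f": f, "zeta": zeta}
    return best
```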
When the impact interval of the Doppler transient model is determined to be 800/50,000 = 0.016 s, the maximal correlation coefficient of the envelope spectrum between the Doppler transient model and the simulated Doppler distorted signal can be obtained. The optimal parameters f=900 and ζ =0.05 when the element 800/50,000 = 0.016 s is determined from set T are thus considered the best parameters for the Doppler transient model. The optimal Doppler transient model and the simulated Doppler distorted signal are shown in Figure 7. Thus, after parameter optimization, the optimal Doppler transient model's impact interval matches that of the simulated distorted signal. The impact interval of the original transient model is 800/50,000 = 0.016 s, as shown in Figure 7c. Therefore, the impact interval of the simulated Doppler distorted signal is determined successfully.
Application of the Proposed Method to Real Locomotive Bearing Fault Diagnosis
Real locomotive bearing fault signals suffering from the Doppler effect are analyzed to further validate the performance and applicability of the proposed method. Two sequential experiments are conducted indoors and outdoors to obtain a Doppler-distorted acoustic signal. In the first experiment, the acoustic signals of locomotive bearings with an inner race defect and an outer race defect are acquired through the microphone. The collected acoustic signals are embedded with the Doppler effect in the second experiment. The test rigs for these experiments are illustrated in Figure 8.
As shown in Figure 8a, the test rig is composed of a drive motor, two supporting pillow blocks (mounted with a healthy bearing), and a bearing [NJ(P)3226XI] for testing, which is loaded on the outer race through a worm-and-nut and an adjustable loading system installed in the radial direction. A 4944-A-type microphone from the B&K Company (Copenhagen, Denmark) is mounted adjacent to the outer race of the defective bearing to measure its acoustic signals. An advanced data acquisition system (DAS) by National Instruments (Austin, TX, USA) is used to perform data acquisition. The parameters of the test bearings are listed in Table 1. Some parameters used in the experiment are listed in Table 2. Figure 8b shows a realistic setup of the second experiment, which is represented by the model illustrated in Figure 3. The parameters are established as follows: S = 8 m, r = 2 m, V s = 30 m/s, and V sw = 340 m/s. The acoustic source is mounted in a moving vehicle, and the microphone and DAS from the first experiment were used. To simulate the locomotive bearing fault, an artificial crack with a width of 0.18 mm is made with a wire-electrode cutting machine on the surfaces of either the outer race and inner race, as shown in Figure 9. The Doppler-distorted inner race fault and outer race fault signals are obtained in these experiments. The proposed method is then used to detect the fault-related impact intervals. Figure 10 shows the Doppler-distorted outer race fault signal under the loading of 3 t and its spectrum. As computed by Equation (17), the outer race characteristic frequency is 138.74 Hz and the periodical impact interval is 0.0072 s. Figure 11 shows the maximal correlation coefficients for each selected impact period. The maximal correlation coefficient reaches its global maximum when the impact period is 0.0072 s, which is the real bearing fault-related impact interval. The optimal transient model and its Doppler-distorted model are shown in Figure 12a,b, respectively. A comparison between the optimal Doppler transient model and the real locomotive bearing fault signal in Figure 12c indicates that the proposed transient model correctly reveals the embedded fault-related impact intervals. (c) Figure 13 shows the maximal correlation coefficients for the different elements from set T. The maximal correlation coefficient is obtained when the impact period is 0.0067 s. However, this is not the real outer race fault-related impact interval. The values of the correlation coefficients are much smaller than those in Figure 11. Hence, the conventional method is not applicable to this problem. Figure 13. Maximal correlation coefficients for different elements from set T using the conventional method.
An outer race fault signal under a different loading, 1 t, is then analyzed. Figure 14 shows the Doppler-distorted outer race fault signal under the loading of 1 t and its spectrum. This signal is processed according to the procedures in Figure 2. The conventional method in the time domain is again used for a comparative analysis. Figure 17 shows the maximal correlation coefficients between the transient model and the real bearing fault signal for the different elements from set T. The conventional method fails to identify the locomotive bearing fault-related impact interval, as the optimal impact interval found is 0.00628 s instead of 0.0072 s.
The actual inner race fault signal shown in Figure 18 is analyzed using the proposed method. Using Equation (19), the inner race fault characteristic impact interval is calculated as 0.0051 s. A transient model with optional parameters is established to recognize the locomotive bearing fault. The Doppler distortion is added into the constructed model. The maximal correlation coefficients for every selected impact period after parameter optimization are shown in Figure 19. The global maximal correlation coefficient is obtained when the impact period for the established transient model is set as 0.0051 s. A comparative analysis between the proposed method and the conventional method is also conducted on the inner race fault signal processing. Figure 21 presents the maximal correlation coefficients between the transient model and the real locomotive bearing fault signal in the time domain. The inner race fault-related impact interval is not successfully recognized, as the conventional method incorrectly selects the impact period T = 0.00638 s. The performance and superiority of the proposed method is therefore validated by these specific case studies and comparative analyses.
Figure 21.
Maximal correlation coefficients for different elements from set T, using the conventional method on the inner race fault signal.
Conclusions
In this study, a new Doppler transient model based on the Laplace wavelet and a spectrum correlation assessment is proposed for diagnosing locomotive bearing faults. The proposed scheme includes Laplace wavelet transient model construction, Doppler distortion, spectrum correlation assessment, and parameter optimization. After implementing the proposed method, the fault-related impact interval can be successfully determined from the optimal Doppler transient model.
The Laplace wavelet is used as the impact base function due to its superior ability to match actual bearing fault impulses. A periodical transient model based on the Laplace wavelet is constructed. The parameters of the model require optimization to properly match the real locomotive bearing fault impact interval.
Through acoustical theoretical analysis, a procedure for adding the Doppler effect to the constructed periodical transient model is proposed to simulate the Doppler distortion experienced by real locomotive bearing fault signals.
A new criterion is established to choose proper parameters during Doppler transient model construction. Correlation analysis is conducted between the envelope spectrum of the established Doppler transient model and the locomotive bearing fault signal. The parameters for obtaining the maximal correlation coefficient are found to be the optimal parameters for the model. Hence, the impact interval in the optimal Doppler transient model is recognized as the fault-related impact interval.
The results obtained by investigating both simulated signals and locomotive bearing fault signals indicate that the proposed method exhibits satisfactory performance in analyzing Doppler-distorted locomotive bearing acoustical fault signals. The proposed method could be developed further for use in a wayside train condition monitoring system.
|
v3-fos-license
|
2014-10-01T00:00:00.000Z
|
2007-12-07T00:00:00.000
|
7898196
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "BRONZE",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1111/j.1751-553X.2007.00996.x",
"pdf_hash": "1d564d15993a931d6a197153e9d3ed43cfe2c025",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46504",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "1d564d15993a931d6a197153e9d3ed43cfe2c025",
"year": 2008
}
|
pes2o/s2orc
|
Performance evaluation and relevance of the CellaVision™ DM96 system in routine analysis and in patients with malignant hematological diseases
The CellaVision™ DM96 is an automated image analysis system dedicated to locating and preclassifying the various types of white blood cells in peripheral blood smears. The system also partially characterizes the red blood cell morphology and is able to perform platelet counts. We routinely analyzed the blood samples from 440 patients with quantitative and/or qualitative abnormalities detected by the XE-2100 Sysmex™. Only 2.6% of cells are not identified by DM96™. After classification of the unidentified cells, very good correlation coefficients are observed between DM96™ and manual microscopy for most hematological parameters, and accuracy is judged excellent, up to 98%. For the most common parameters, false positive and false negative ratios are also very good. Whatever the pathology and the number of blasts on the smear, all patients were positive for blast detection on DM96™. The system is a useful tool for assisting in the diagnosis and classification of most acute or chronic leukemias. Automatic cell location and preclassification, along with unique cell views on the computer screen, could reduce the time spent performing differentials and make real-time collaboration between colleagues a natural part of the classification process. The workstation also provides an ergonomically correct and relaxed working environment. We suggest its use in routine analysis; the system could be very helpful for the accurate morphological diagnosis of samples from patients with malignant hematological disease.
S U M M A R Y
The CellaVision TM DM96 is an automated image analysis system dedicated to locating and preclassifying the various types of white blood cells in peripheral blood smears. The system also partially characterizes of the red blood cell morphology and is able to perform platelet counts. We routinely analyzed the blood samples from 440 patients with quantitative and/or qualitative abnormalities detected by the XE-2100 Sysmex TM . Only 2.6% of cells are not identified by DM96 TM . After classification of the unidentified cells very good correlation coefficients are observed between DM96 TM and manual microscopy for most hematological parameters and accuracy is judged excellent up to 98%. For most common parameters, false positive and false negative ratios are also very good. Whatever the pathology and the number of blasts on smear, all patients were positive for blast detection on DM96 TM . The system is a useful tool for assisting in the diagnosis and classification of most acute or chronic leukemia. Automatic cell location and preclassification, along with unique cell views on the computer screen, could reduce the time spent performing differentials and make real-time collaboration between colleagues a natural part of the classification process. The workstation also provides an ergonomically correct and relaxed working environment. We suggest its use in routine analysis; the system could be very helpful for the accurate morphological diagnosis of samples from patients with malignant hematological disease. quality stained blood smear preparation for the accurate assessment of cellular morphology. Despite the significant improvements during the last years in hematology analyzers, no significant progress has been made in terms of automatic examination of peripheral blood cells. Irrespective of the analyzer, approximately 15% of the blood samples require manual microscopic observation either because of biological rules or analyzer flags. The relative number of samples to be reviewed will probably not decrease in years to come. Smear examinations are time consuming and require well-trained medical technologists and biologists.
Microscopy automation should be available in hematology laboratories. The decrease of cytology proficiency in the daily practice, the need for development of new innovative techniques in hematology laboratories in the face of limited human resources, and finally, the increase and the complexity of pathologies attributable to population aging create a need for automation of the cytology platform in all laboratories. In this context, we had the opportunity to evaluate the CellaVision TM DM96 automated microscope (CellaVision AB, Ideon Research Park, 70 Lund,Sweden) in the hematology laboratory at Caen University Hospital. The academic hospital has 1750 active beds and the laboratory performs 500-600 Complete Blood Count with Differential (CBC-DIFF) per day with XE-2100 Sysmex TM analyzers (Sysmex Corporation 1-S-1, Wakinohama Kaygandori, Cho-Ku, Kobe 651-0073, Japan). Adult and pediatric hematology account for 10% of the demands, oncology represents 15% and surgical and intensive cares about 20%. We evaluated CellaVision TM DM96 and discuss how such a device could be integrated into the daily routine and the performance of DM96 TM in the diagnosis and monitoring of patients with malignant hematological diseases.
M A T E R I A L S A N D M E T H O D S The automated microscope DM 96
CellaVision TM DM96 is an automated device for the differential counting of white blood cells (WBCs) and characterization of red blood cells (RBCs). It consists of a slide feeder unit, a microscope with three objectives (·10, ·50, and ·100), a camera and a computer system containing the acquisition and classification software CellaVision TM blood differential software ( Figure 1). A slide autoloader facilitates the automatic analysis of up to 96 smears with continuous loading access. The number of WBC to be analyzed is user definable from 100 up to 400. To perform a differential count, a thin film of blood is wedged on a glass slide (a blood smear) from a peripheral blood sample and stained according to the May-Grunwald Giemsa protocol. The analyzer performs the acquisition and preclassification of cells and the operator subsequently verifies and modifies, if necessary, the suggested classification of each cell ( Figure 2). The operator can also introduce additional observations and comments when needed. For this reason, persons specially trained in the use of this instrument and skilled in the recognition of cells can operate the DM96 TM . The system makes the following WBC classifications: band neutrophils, segmented neutrophils, eosinophils, basophils, monocytes, lymphocytes, promyelocytes, myelocytes, metamyelocytes, blast cells, variant form lymphocytes and plasma cells. The system also preclassifies non-WBC into the following classifications: erythroblasts, giant thrombocytes, thrombocytes aggregation, smudge cells and artifacts. 'Unidentified' is a class of cells and objects that the system cannot identify. The system has four flag levels for the following RBC morphological characteristics: polychromasia, hypochromasia, anisocytosis, microcytosis, macrocytosis, and poikilocytosis. Besides the WBCs mentioned above, the operator or 'user' can reclassify cells into the following classes afterwards: immature eosinophils, immature basophils, promonocytes, prolymphocytes, large granular lymphocytes, hairy cells, Sezary cells, others, megacaryocyte, not classed, and 15 user-defined classes. The operator can also add the following characteristics for RBCs: schizocytosis, helmet cells, sickle cells, spherocytosis, elliptocytosis, ovalocytosis, teardrop cells, stomatocytosis, acantocytosis, and echinocytosis, Howell-Jolly bodies, Pappenheimer bodies, basophilic stippling, parasites, and 10 other definable characteristics.
Smears and stains
Slides analyzed by the DM96 TM were prepared with SP-100 SYSMEX TM from venous blood sample collected in EDTA-type anticoagulant and previously analyzed with XE-2100 SYSMEX TM . Staining program and reagents were as follows: May Grunwald (MG) and Giemsa (Biolyon, France), MG pure time: 2.5 min, MG dilute time: 3 min, Giemsa time: 7 min, rinse 0 min and drying time 5 min.
Patients
Four hundred and forty nonselected patients processed with the XE-2100 were analyzed by medical technologists experienced using both conventional microscopy method and DM96 TM . All these samples were abnormal according to routine laboratory criteria and hence, justified a manual smear review (quantitative abnormality, qualitative flag from XE-2100 TM , malignant hematological disease). Under the microscope, 100 leucocytes were observed for establishing the control differential and a mean of 110 leucocytes were required for DM96 TM .
To analyze the performance of automated microscopy with DM96 TM and measure its impact on laboratory organization and workflow, we studied its ability to correctly identify blood cells and accuracy compared with manual method and/or XE-2100 TM . Finally, we analyzed the sensitivity for detection of pathological cells in case of hematological disease.
Efficiency of cell recognition
We analyzed the accuracy in classifying normal and abnormal cells for routine parameters by DM96 TM (neutrophils, eosinophils, basophils, lymphocytes, monocytes, immature granulocytes, and erythroblasts) out of 62904 cells [including Nucleated Red Blood Cells (NRBCs) and smudge cells] issued from the 440 patients analyzed. Efficiency of recognition has been calculated for each cell category. Unidentified cells from DM96 TM have been studied as well and their influence on the above-mentioned result calculated.
Comparison with manual method
We did not test normal blood samples in this study as this had already been performed and the results showed DM96 TM to be reliable and accurate (Ceelie, Dinkelaar & van Gelder, 2007). The results of 356 patients with no hematological disease but laboratory flagging criteria obtained on DM96 TM were compared after medical technologist reclassification with the manual differentials performed by the same user and to XE-2100 TM . Correlation between DM96 TM and both manual count and XE-2100 TM result was established for neutrophils, eosinophils, basophils, lymphocytes, monocytes, erythroblasts, and immature granulocytes (including metamyelocytes, myelocytes, and promyelocytes). In case of disagreement, a clinician reanalyzed both the slide and the validation issued from DM96 TM .
Malignant hematological diseases
We focused then on 84 patients with malignant hematological disorders from various types. The classification of these 84 patients was made according to the WHO criteria (Harris et al.,1999) and is described in table 1. Blast recognition and quantification by DM96 TM was studied in 34 patients, acute lymphoblastic leukemia (ALL), acute myeloid leukemia (AML) or chronic myeloproliferative disorders/myelodysplastic syndromes (CMPD/MDS). Three patients were excluded for the analysis in the absence of blasts cells in the peripheral blood. All these three patients had myelodysplasic syndromes (MDS). For all other patients, B-cell chronic lymphocytic leukemia (CLL), other B-cell chronic lymphoproliferative disorders (B-CLPD), we focused on capacity of DM96 TM to efficiently recognize mature cells and provide images permitting an easy and reliable morphological classification.
Statistical analysis
Statistical analysis was performed using Microsoft Ò Excel software. For correlation analysis, we used twotailed paired t-tests to evaluate differences between the percentage of blast cells detected by DM96 TM and manual microscope in patients with blast cells in the peripheral blood. Clinical sensitivity and specificity of the CellaVision DM96 TM were defined as its ability to obtain positive and negative results concordant with medical technologist before and after classification of unidentified cells by DM96 TM .
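For reference, the paired comparison and the concordance-based sensitivity/specificity described above can be reproduced with a few lines of Python (the authors used Microsoft Excel; SciPy's ttest_rel is an equivalent two-tailed paired t-test, and the example below is a sketch rather than the original workflow).

```python
import numpy as np
from scipy import stats

def paired_comparison(dm96_pct, manual_pct):
    """Two-tailed paired t-test between DM96 and manual blast percentages."""
    t_stat, p_value = stats.ttest_rel(dm96_pct, manual_pct)
    return t_stat, p_value

def sensitivity_specificity(dm96_positive, reference_positive):
    """Concordance of DM96 results with the technologist's reading."""
    dm96 = np.asarray(dm96_positive, dtype=bool)
    ref = np.asarray(reference_positive, dtype=bool)
    tp = np.sum(dm96 & ref)
    fn = np.sum(~dm96 & ref)
    tn = np.sum(~dm96 & ~ref)
    fp = np.sum(dm96 & ~ref)
    return tp / (tp + fn), tn / (tn + fp)
```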
R E S U L T S Accuracy of cell recognition
Only 2.6% of cells are not identified (especially NRBCs and immature granulocytes) leading the global efficiency of DM96 TM to 95% of direct correct identification. In total, when reclassifying unidentified cells by medical technologist, accuracy is judged excellent up to 98%. For most common parameters, false positive and false negative ratio are very good (Table 2).
Comparison with manual method
Correlation for DM96 TM results with the manual method and/or XE-2100 TM is excellent for neutrophils, lymphocytes, eosinophils, and acceptable for immature granulocytes, erythroblasts, and for basophils. The correlation observed for monocytes was not as good as expected; both results (DM96 TM and optical manual count) were usually lower than the automatic count but this was not clinically relevant.
Patients with blasts on smear
Whatever the pathology (AML, ALL, and CMPD/MDS) and the number of blasts on the smear, all 34 patients were positive for blast detection on DM96™. Additionally, it appears very easy to distinguish myeloid blasts from lymphoid blasts. Although DM96™ underestimates the number of blast cells (especially in ALL, where a large number of them is misclassified as lymphocytes), after manual validation the correlation with the microscope appears very good (Figure 3). In patients with chronic myeloproliferative disorders or myelodysplastic syndromes, both the quantitative and the qualitative analysis of immature granulocytes was comparable with that observed for routine patients. Basophils were clearly identified even when they showed abnormal features. The blast count for these patients appeared to be reliable.
B-cell chronic lymphoproliferative disorders
DM96 TM classifies CLL cells without problem but often provides a wide count of 'smudge cells' arising from the smear method. The recommended procedure in this case is still to use the lymphocyte count from the analyzer as the most reliable result. Concerning other lymphoid pathologies summarized in
D I S C U S S I O N
The DM96 TM has already proven to be reliable for normal patient samples. In this study, we became aware of the capabilities and limitations of the automated microscope DM96 TM for analyzing patient samples with quantitative and qualitative abnormalities detected by XE-2100 TM . From a biological point of view, we as others (Swolin et al., 2003;Kratz et al., 2005;Roumier et al., 2005;Contis & Williams, 2006, 2006 demonstrated that the results obtained from DM96 TM correlated well to those obtained by manual counting of all patient samples, suggesting that DM96 TM may be useful for the analysis of the great majority of parameters tested. About monocytes, heterogeneous distribution on slides is a well-known problem arising directly from the smear method and consequently, both the choice of the observation area (which can be different for DM96 TM and manual microscopy) and the number of cells analyzed contribute to the difference in monocyte counts. Modern characterization of acute malignant hematological disease is a multidisciplinary process. Initially, it requires the integration of clinical, morphological and cytochemical information. A correct and rapid hematological evaluation is necessary to follow-up with the appropriate laboratory tests, specifically immunophenotyping, metaphase cytogenetics and molecular studies. However, the interpretation of morphological and cytochemical stains remains central to the diagnosis and classification of AML. The identification of multilineage dysplasia is entirely dependent on light microscopic assessment of the leukemia cells. Despite the advances in diagnostic technologies, the maintenance and improvement of morphological skills still remain essential requirements in the diagnosis of AML. In case of AML, DM96 TM is able to detect blast cells and to identify myeloid blast cells and maturing cells. In addition, it facilitates the evaluation and quantification of dysplasia on a high number of myeloid cells preselected by DM96 TM , which are then classified and eventually properly reclassified by the biologist. In cases of B-ALL and more generally in cases of myeloperoxydase (MPO) negative blast cells, immunophenotyping is always still required for the initial diagnosis. DM96 is also able to detect blast cells but is unable to classify blast cells as lymphoblasts. In this context, it makes sense to rely upon conventional microscopy. If the sample quality is poor, it will be also necessary to survey the entire smear in manual mode. For post-treatment monitoring of patients with malignant hematological disease, DM96 TM represents a good tool for the detection of abnormal cells but is not appropriate for quantifying blast cells in the peripheral blood of patients with B-ALL. In cases of B-CLPD and when operated by an experienced cytologist, DM96 TM is helpful for identifying the disease and especially the lymphoid abnormalities. As an example, we rapidly identified binucleated lymphocytes characteristic of polyclonal lymphocytosis with binucleated lymphocytes (Mossafa et al. 1999), hairy cells in patients with HCL and atypical lymphocytes in patients with B-CLL. An overview of all lymphoid cells is of great interest in lymphocyte analysis.
Finally, the DM96™ proved fully comparable with the manual method in a test using control patient blood smears, and in daily practice it is applicable to more than 90% of the leucocyte reviews.
The validation and screening of abnormal smears is one of the core competencies of the technical staff, which is under the supervision of a biologist. For such a routine process, a significant timesaving could be realized by implementing such a reliable automatic system. Therefore, these observations should provide food for thought when considering modalities for improving the efficiency of a hematology laboratory.
The routine introduction of DM96 TM will probably have a great impact on the logistics and organization of both specialized and general hematology laboratories. Depending on the validation requirements and guidelines in each country, all the smears performed by SP-100 TM can be passed onto the DM96 TM in a continuous mode. If, after the unidentified cells have been identified, classified, confirmed and validated, there are no blasts present, the validation could be carried out by the DM96 TM , except when particular difficulties are encountered. For the technical staff, the installation of DM96 TM would have many consequences: the reduction of technical staff time at the microscope while simultaneously increasing the efficiency of the workflow, the elimination of medical technologists facing a difficult diagnosis alone, improved ergonomics of the workstation (elbows, eyes, and back), reduction of the filing of the blades and finally, optimization of time and quality. The introduction of DM96 TM could also optimize the time of the biological staff and improve the proficiency of morphological expertise. In addition, the easy and clear presentation of all patient samples on a computer screen will help ensure the quality of follow-up care in cancer patients. The images can be transmitted to other experts for consultation and confirmation and will facilitate validation of clinical protocols. We also can hope for shorter response times, a reduction in errors, an improvement in continued and advanced education, possibly a redeployment of human resources and a significant cost reduction for a hemogram.
|
v3-fos-license
|
2018-06-25T09:25:39.975Z
|
2012-01-01T00:00:00.000
|
155516611
|
{
"extfieldsofstudy": [
"Political Science"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://doi.org/10.5922/2079-8555-2012-1-4",
"pdf_hash": "e5b9347a570064de293401b334d91e16daef7414",
"pdf_src": "ScienceParsePlus",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46505",
"s2fieldsofstudy": [
"Political Science"
],
"sha1": "17176cf1b1aebd1dc334fcb51f0a21e265276382",
"year": 2012
}
|
pes2o/s2orc
|
ASSESSMENT OF THE EFFICIENCY OF RUSSIAN RESPONSE TO THE IMPLEMENTATION OF US MISSILE DEFENCE DEPLOYMENT CONCEPT IN EUROPE
This article is dedicated to the problems of deployment of the US anti-missile defence system in Eastern Europe. The European system of US missile defence is just one of the components of global US missile defence. This work aims to analyze possible Russia’s responses within military and political spheres. The measures proposed are divided into three subgroups: soft, medium and hard depending on the implementation of the adopted missile defence concept by the USA. This research employs the structure-system method and the method of actualization. The authors outline both positive and negative consequences of such actions for the Russian Federation, the USA, eastern European countries and the neighbouring countries, including the Baltic Sea states. The practical significance of this study consists in the proposed and justified responses of the Russian Federation that may serve as a basis for the scenarios of development of international situation and help to forecast the level of tension in Russia-US relations.
The idea of creation of a new anti-missile defence concept voiced by President George W. Bush in 2001 [19] and the concept of US missile defence deployment in Eastern Europe presented by the Presidential Administration in January 2006 [15] caused severe criticism from the Russian Federation and its alliance partners.
Despite the constant statements of the US Presidential Administration that the deployment of AA radars and an anti-missile defence system in Eastern Europe is by no means a move against the Russian Federation, Russia's government believes that the USA pursues the above-mentioned goal. The defence concept aims to create "the system of non-nuclear means designed to counter ballistic missiles of all ranges - short, medium, intermediate and long" [15, p. 2]. The new anti-ballistic missile defence system (ABM) was originally planned to have the form of a triangle, with one corner situated in Eastern Europe and two others in the USA - in Alaska (Fort Greely) and in California (Vandenberg). Anti-ballistic missile defence components would destroy hostile warheads in the terminal phase with the Patriot AD Weapon System. Missile defence facilities deployed in Eastern Europe would be employed for target detection and the destruction of ballistic missiles at the ascent and midcourse stages. The space tracking and surveillance system should destroy warheads in the midcourse phase [10]. After Barack Obama and his Administration came to power in 2009, the initial plans of George W. Bush were reconsidered and adjusted. They then provided a basis for the new European NATO Deployment Concept, which is to be implemented in four stages. The US ABM is to be put into full operational service in 2018 but starts to operate as early as May 2012 [11].
In this connection the Russian Federation is ready to take any possible steps to prevent the escalation of threats to its international security. The range of these steps depends on the actions of the USA on the deployment concept implementation.
In the given case all possible Russian measures in response to the implementation of the US missile defence deployment concept can be grouped according to two spheres -political and military (Tables 1 and 2).
Political measures are soft, medium and hard, and aimed at developing an alternative to ABM or creating an alliance with Russia's partner-countries as well as ensuring legality of Russian response to the threat from the Third Site countries.
Military measures provide security to the RF, and are aimed at the development of a collateral defence system and the upgrading of aerospace defence capacity of the Russian Army.
Continued talks and negotiations with the US Administration on the inadmissibility of the ABM deployment. This working group should concentrate on fulfilling the following objectives:
1. Development of negotiation proposals on a number of topics, including, but not limited to, the utilization of already existing US and RF ABM defence systems (radar stations in Gabala, etc.) and joint participation of the US and Russia in the development of ABM defence.
2. Discussing counter-offers proposed by the US Administration and/or Working Group.
3. Discussing issues arising from the conflicting views of the parties.
4. Keeping the public informed about the negotiations.
5. Evaluation of existing and prospective US and Russian ABM defence systems, and forecasting of the outcomes of agreements between Russia and the US and of US ABM defence deployment in Eastern Europe (for Russia).
9. Developing alternative ways of cooperation between Russia and the US regarding ABM defence.
Developing proposals on a new ABM treaty between Russia and the United States. If a joint ABM system is developed, the new treaty should include obligatory clauses defining joint efforts of both countries in this area.
Developing proposals on the new ABM treaty with the USA, drawing on the positive experience of the USSR-USA ABM Treaty of 1972. The following provisions (on the agreement of the parties) should be included in the treaty. Each Party undertakes:
1. not to give missiles, missile systems, surface-mobile ABM defence systems, seaborne multifunctional combat information control systems, long- and medium-range ABM interceptor missiles and other elements and components of the ABM system capabilities to counter strategic ballistic missiles or their elements in flight trajectory, and not to test them in an ABM mode;
2. not to give missiles, launchers or radars, other than ABM interceptor missiles, ABM launchers or ABM radars, capabilities to counter strategic ballistic missiles or their elements in flight trajectory, and not to test them in an ABM mode;
3. not to create, test or deploy the ABM system or its components on its land, air, space and surface-mobile bases, excluding those already deployed or in the process of deployment;
4. not to create, test or deploy ABM launchers capable of launching more than one ABM at a time; not to modify already deployed launchers to give them such capability; and not to create, test or deploy automatic or self-loading devices designed to speedily reload missile launchers;
5. to keep the existing ABM systems under certain conditions (the conditions of conservation of the ABM system to be determined during additional talks, or the conditions of 1972 to be kept).
Conducting international hearings (in the EU, SCO, CIS, Disarmament Committee of UN General Assembly, CSTO, EurAsEC) on disarmament and missile defence issue with compulsory achievement of joint political agreement.
Already on the 15 th of June 2011, during the anniversary SCO summit in Astana, the participants of the summit adopted the Astana Declaration, in which the leaders of the SCO states (Russia, China, Kazakhstan, Uzbekistan, Tajikistan and Kirgizia) have condemned US plans of global ABM defence system deployment: «The member states believe that the unilateral and unrestricted build-up of a missile defence capability by one state or a group of countries can hurt strategic stability and international security» [9]. In his interview, the Russian Minister of Foreign Affairs, Sergey Lavrov, said that the criticism is directed not only against the deployment of Euro-ABM, but against the "global ABM system that is being deployed by the USA all over the world, even in South-East Asia" [17].
Conducting negotiations with Iran on the non-proliferation of nuclear materials for military purposes, on the inadmissibility of the development of long-range nuclear missiles, on the possibility of further IAEA control over the nuclear programme implemented by Iran, on the necessity of fighting nuclear terrorism, and on the necessity of controlling the spread of missile technologies and of joining the Missile Technology Control Regime.
Coordinating the ABM defence issue with the key issues of disarmament and non-proliferation across the world and in Europe will complement the already existing political measures. The possibility of refusing further disarmament and non-proliferation commitments, and the possibility of Russia's withdrawal from the SNF-3 (New START) Treaty, were underlined by the President of the Russian Federation in his statement of 23 November 2011 on the Euro-ABM deployment in Europe [5].
Convincing Eastern European countries (Romania, Poland and the Czech Republic) that the US ABM defence system deployment may have the following negative effects (to be achieved through public media, diplomatic, private and political channels of influence):
-cooling of political and economic relationships with Russia;
-cooling of relationships with the states involved in the nuclear debate;
-discord within the EU, change of EU authority and introduction of new members;
-emergence of secret strategic objects that will disrupt the relaxed European lifestyle;
-an increased US military contingent in the Third Site states;
-involvement in the arms race between the USA and Russia, with the possible addition of third countries;
-destruction of targets, including by missile attacks, on their territory;
-radioactive poisoning of the territory after such attacks.
We also call for support (media and financial) of NGOs, unions, influential individuals who used to hold prominent positions in the governments of the Third Site countries, public intellectuals, and ordinary citizens who wish to protest against the ABM deployment plans. Since some of the Eastern European countries also lie in Central Europe (Poland, Romania, Czech Republic), their general attitude and reaction can greatly influence the geopolitical situation, preferably in the interests of the Russian Federation.
Introduction of a stipulation on the possibility of a preventive strike (possibly with battlefield nuclear weapons) on the objects of the 3rd missile launching area of the US ABM Defence into the Russian Federation Military
Doctrine. At present, the Military Doctrine of the Russian Federation, approved by Russian Federation Presidential Edict on 5 February 2010, reserves the right for Russia "to utilize nuclear weapons in response to the utilization of nuclear and other types of weapons of mass destruction against it and (or) its allies, and also in the event of aggression against the Russian Federation involving the use of conventional weapons when the very existence of the state is under threat" [3]. Among the main external military threats it counts "the desire to endow the force potential of the North Atlantic Treaty Organization (NATO) with global functions carried out in violation of the norms of international law and to move the military infrastructure of NATO member countries closer to the borders of the Russian Federation, including by expanding the bloc", as well as "the creation and deployment of strategic missile defence systems undermining global stability and violating the established correlation of forces in the nuclear-missile sphere, and also the militarization of outer space and the deployment of strategic non-nuclear precision weapon systems" [3]. At the same time, "the Russian Federation's military policy is aimed at preventing an arms race, deterring and preventing military conflicts, and improving <…> means of attack for the purpose of defending and safeguarding the security of the Russian Federation and also the interests of its allies" [3]. Moreover, Russia's paramount task is preventing a nuclear military conflict, and among the main goals in preventing and deterring military conflicts it includes "creating mechanisms for the regulation of bilateral and multilateral cooperation in the sphere of missile defence" [3].
Seeking coalition with states holding similar views on the 3rd missile launching area of the US ABM Defence (China, Kazakhstan, Uzbekistan, Tajikistan, Kyrgyzstan) and cooperation at the international scene. Extending the coalition is possible through states that hold neutral (or indifferent) views on the said question or who have not yet defined their views. On October 4, 2011 it was reported that Russia and the Ukraine held negotiations on the cooperative missile shield system [16]. Head of the Ukrainian Mission in NATO, Ambassador extraordinary and plenipotentiary Igor Dolgov said that the Ukraine would participate in the NATO ABM Defence only if Russia joined it [18].
Depending on the political situation the policies also include Russia's dismissal, consent or partial consent with the US proposals concerning the ABM.
Development of military-technical proposals to create an alternative ABM variant, with the Russian Federation Armed Forces taking the lead:
-the US ABM system utilizing information from Russian missile attack warning facilities (the radar station in Gabala (Azerbaijan) and other areas) on the situation with possible nuclear missile attack forces;
-deployment of the aerospace target weapons of the Russian Federation ABM system in the southern border areas of Russia and along other missile threat directions;
-forming a joint interface for the information management systems of Russia and the US ABM Defence.
At the Lisbon Summit that took place in November 2010, Russia and the US agreed to continue discussing future cooperation on ABM Defence. Russia proposed creating a sectoral ABM defence system, according to which missiles flying over Russia towards NATO members would be destroyed by Russian forces. In turn, NATO would destroy missiles traversing the territories of NATO members and aimed at Russian facilities. In addition, the parties would not aim their ABM facilities at each other or deploy them at the shared borders [4]. However, at the meeting of the Russian President with the NATO administration held in July 2011 in Sochi and during the visit of the Russian Minister of Foreign Affairs Sergey Lavrov to Washington, this idea was rejected. At the Russia-NATO Council session held on December 8, 2011, the parties did not achieve any progress on this issue. The next NATO summit will take place in May 2012 in Chicago. Its results may define the international situation.
Constant monitoring by extraterritorial surveillance facilities (space facilities for Earth remote sensing, such as "Resurs-DK") of the Third Site facilities of the US ABM Defence system (both operational and under construction), updating their locations and creating 2D images of them for input into the fire weapons guidance system. "Resurs-DK" makes it possible to obtain detailed images of the facilities and transmit the information through a radio channel to Earth.
Precluding supplies of equipment and technologies for manufacturing nuclear missile weapons to third countries. Russia may join in sanctions against Iran and the Democratic People's Republic of Korea and discontinue its peaceful nuclear cooperation with these countries.
Deployment of means of destruction (short-range "Iskander" missiles) within reach of the 3rd missile launching area of the US ABM Defence system (the Kaliningrad region and Russian Federation regions bordering on the Third Site), which will not demand any substantial expenses and will take the form of an asymmetrical response. In 2008, in his first Address to the Federal Assembly, the Russian President announced the possibility of installing "Iskander" missiles in the Kaliningrad region should the need arise [13]. On 23 November 2011, Dmitry Medvedev, in his special address to the citizens of the Russian Federation, confirmed the intention to deploy the "Iskander" missile complex in the Kaliningrad region and strike systems in the western and southern areas of Russia if necessary [5]. The Russian President's stated intention to deploy operational "Iskander" missiles and to make the radar station in Kaliningrad operational provoked a negative response from the Baltic States. Lithuanian Prime Minister Andrius Kubilius reassured Russia that the NATO ABM Defence system is not targeted at Russia and mentioned that "it must be taken seriously, but… to assure Russia that it should not act so belligerently, we must… together with the NATO partners" [14]. At the same time, Latvian Defence Minister Artis Pabriks "asked the authorities of the Latvian National Defence Forces to estimate Medvedev words 'from the standpoint of the military threat'" [14].
Research and development of new means of destruction (suppression) of the Third Site facilities (for example, precision weapons, radio-electronic countermeasures, aerospace jamming, etc.). This will probably require introducing amendments into the government military contract. In 2009-2010 the Ministry of Defence already contracted research and development work on the creation of prototype systems for the Strategic Missile Forces and Aerospace Forces; this work can be financed from the military spending budget under the Federal Armament Programme for 2011-2020 [2]. In the above-mentioned Presidential Address (Nov. 23, 2011), Dmitry Medvedev tasked the Military Forces of the Russian Federation with "developing measures ensuring the destruction of control and information-transfer system within the ABM defence, should such need ever arise" [5].
Further development of the Russian aerospace defence system (data transmission and strike capacity, methods and forms of overcoming ABM, etc.) within the framework of the new strategic partnership between the Russian Military Forces and its Aerospace Forces. As of December 2011, the troops of the new Aerospace Defence Forces, created in accordance with the Presidential decree, took up their duty. Shortly before that, another Presidential decree (of 29 November 2011) introduced a missile attack detection system, the radar station "Voronezh-DM", into the military facilities of the Kaliningrad region [6]. According to Dmitry Medvedev, the Aerospace Forces will help to increase the protection of strategic nuclear objects [5]. "From the military and geopolitical standpoint, the aerospace defence is a valuable tool for keeping geopolitical balance in the modern world. From the strategic standpoint, it is the main guarantee of ensuring that the President of the Russian Federation - and Supreme Commander-in-Chief - receives correct and relevant information about the airspace situation and is thus able to make strategic decisions" [8, p. 46].
Development of new means of overcoming the ABM defence of the USA and of the forms and technologies of their combat application. The "Bulava" ICBM, for example, carried by nuclear submarines, has a range of 8,000 km, and its main advantage is that it carries individually guided manoeuvring nuclear warheads able to change the altitude and trajectory of their flight [1]. In his November Address, Dmitry Medvedev underlined that the strategic nuclear missiles available to the Russian Military Forces and Strategic Missile Forces "will be equipped with the top means of overcoming ABM defence and with new, highly efficient warheads" [5].
In this article we propose a number of development scenarios for Russian response to the ABM system deployment -each of those scenarios can be triggered and put into action depending on the actual steps implemented by the US Administration.
|
v3-fos-license
|
2018-12-05T09:57:26.233Z
|
2017-11-14T00:00:00.000
|
55139279
|
{
"extfieldsofstudy": [
"Biology"
],
"oa_license": "CCBY",
"oa_status": "HYBRID",
"oa_url": "https://newprairiepress.org/cgi/viewcontent.cgi?article=7508&context=kaesrr",
"pdf_hash": "346d101900aea3528042c05b46ce31d093cea454",
"pdf_src": "Anansi",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46509",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"sha1": "cd7c233fd39ec84b8ef9f00eb401ce272aa6efd3",
"year": 2017
}
|
pes2o/s2orc
|
Genome Diversity and Molecular Detection of PRRS Field Strains and Vaccine Strains, and PCV3 and PCV2 Strains
Summary Molecular diagnosis of porcine reproductive and respiratory syndrome virus (PRRS) and porcine circo virus (PCV) are challenging due to high genetic diversity in the viral genomes. Differentiating PRRS vaccine strains is even more challenging and is currently done by DNA sequencing, which is expensive and time-consuming. A multiplexed system (Luminex) allowing multiple detection targets in the same reaction is available. However, this system is not fully developed for common swine pathogens. Therefore, an assay was built to detect the majority of field PRRS strains by using different pairs of primers, and at the same time, to provide differentiation of the four PRRS vaccine strains used in the US by using vaccine-specific primers. Two sets of detection primer pairs were used that detect 85.4% and 91.2% of the 694 full genomes of the current PRRS collection in the GenBank. The combination of the 2 primer pairs will detect 98.1% of the genomes in the GenBank. Testing a limited number of field strains and the four vaccine strains (PrimePac, Ingelvac MLV, Ingelvac ATP, and Fostera) available in the US indicated that the assay detected all strains and identified each of the four vaccine strains correctly.
Recently, clinical signs of porcine dermatitis and nephropathy syndrome, reproductive failure and multisystemic inflammation have been associated with PCV3. A real-time PCR assay is developed based on 67 PCV3 full-genome sequences with 100% detection rate. Also, 1,907 available PCV2 genomes were analyzed. Based on this analysis, 2 primer pairs were designed to detect an estimated 94.8% and 90.5% field strains, respectively, with a combined detection rate of 99%. The PCV3 and PCV2 assays were then combined into one reaction with an internal control to monitor the DNA extraction efficiencies. The combined multiplex assay detected all PCV3 and 99% of PCV2 strains with no cross-detection observed.
Introduction
Real-time polymerase chain reaction (PCR) is the most-used platform for molecular diagnostics for animal and zoonotic pathogens. Simplicity of operation, fast turnaround time, and exponential amplification features have made PCR-based diagnostic applications widely applied. Most real-time PCR systems though, are limited to 5 channels of detections, and practically only 3 to 4 channels are used in multiplexed assays. In addition, viral genomes such as PRRS and PCV2 are constantly changing. It has been challenging to keep the PCR assays current with the increasingly divergent viral genomes. The fluorescent bead-based Luminex assays allow the use of 100 or more primer pairs in one reaction, providing the opportunity to build highly multiplexed assays for multiple pathogen detection with a single assay.
The most practical way to design a PCR assay to detect most field strains is to perform a thorough analysis on all genomic information that is available at the time of design. Then update the assay as needed based on viral genome changes over time. In this study, a multiplex Luminex assay was developed to detect and differentiate field PRRS strains and the 4 PRRS vaccine strains used in the US (Prime Pac, Ingelvac MLV, Ingelvac ATP, and Fostera). In addition, primers were included to detect PCV2 and PCV3 strains.
Procedures
There were 694 PRRS full-genome sequences available from the NCBI GenBank (https://www.ncbi.nlm.nih.gov/) at the time of design. The 694 sequences were downloaded and aligned in CLC Genomics Workbench (https://www.qiagenbioinformatics.com/products/clc-genomics-workbench). The aligned sequences were used to identify conserved regions for the detection primer design and to identify regions specific to each vaccine strain for the vaccine differentiation designs. Two pairs of detection primers were identified that could detect 85.4% and 91.2% of the 694 full genomes individually and that, when combined, would detect 98.1%. Also, a pair of primers for each vaccine strain was identified. Magnetic beads with capture sequences that matched our selected primers were purchased from Luminex (https://www.luminexcorp.com/). A capture sequence is synthesized on the beads for each primer pair. Also, the reverse primer of each designed primer pair is synthesized with a biotin attached to generate a signal for detection. The primers were then pooled and used for amplification. The beads were washed to remove unused primers, and the amplified PCR products were purified and hybridized to the magnetic beads. Hybridized beads were run on a BioRad Bio-Plex 200 system to generate detection results.
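As a rough illustration of how the per-primer and combined detection rates quoted above could be estimated from a genome collection, the sketch below uses exact substring matching over a dictionary of sequences. This is a simplification (real primer evaluation tolerates mismatches, handles degenerate bases and checks both strands), and the primer and genome values shown are placeholders rather than the actual assay sequences.

```python
# Minimal sketch of estimating combined primer coverage over a set of genomes.
# Exact substring matching is a simplification of real primer-binding checks.
from typing import Dict, Set

def detected_by(primer_fwd: str, primer_rev_rc: str, genomes: Dict[str, str]) -> Set[str]:
    """Return IDs of genomes containing both primer binding sites (exact match)."""
    return {gid for gid, seq in genomes.items()
            if primer_fwd in seq and primer_rev_rc in seq}

def coverage(hits: Set[str], genomes: Dict[str, str]) -> float:
    """Percentage of genomes detected."""
    return 100.0 * len(hits) / len(genomes)

# genomes = {"accession_1": "ATG...", ...}   # e.g. the 694 PRRSV full genomes
# hits1 = detected_by("ACGT...", "TTGC...", genomes)   # placeholder primer pair 1
# hits2 = detected_by("GGCA...", "CATT...", genomes)   # placeholder primer pair 2
# print(coverage(hits1, genomes), coverage(hits2, genomes),
#       coverage(hits1 | hits2, genomes))   # individual and combined coverage
```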
The 35 PCV3 full-genome sequences that were available in the GenBank and the 32 PCV3 full genomes sequenced at the K-State lab 1 were aligned together with selected PCV2 genomes in CLC Genomic Workbench. The PCV3-unique regions that are conserved within PCV3, but have low homology to PCV2 were used for primer and probe designs. Currently 1,907 full genomes of PCV2 sequences are available in the GenBank. Due to the high diversity of the genomes, an assay designed from ORF1 that can detect 94.8% of the strains, and another assay designed in ORF3 that can cover 90.5% of the strains, were used. When used together, the two designs can detect 99% of the 1,907 sequences. A swine housekeeping gene that is always present in the pig genomes, SB2M, was also used as an internal control to monitor DNA extraction efficiencies. The PCV2, PCV3, and SB2M assays were then multiplexed into a single reaction and tested with viral isolates and field samples.
Results and Discussion
The two pairs of detection primers collectively detected all four PRRS vaccine strains and the four PRRS-positive diagnostic samples. They did not detect any of the four PRRS-negative diagnostic samples or the negative control (Figure 1).
The PCV3/PCV2/SB2M triplex real-time PCR was analyzed under multiplexed conditions. The PCR amplification efficiency was 95.5% for PCV3 and 91.6% for PCV2, both within the general guideline of 90-110%. The correlation coefficients for PCV3 and PCV2 were both greater than 0.99, which also meets the general criteria. Testing of 717 diagnostic samples indicated that PCV3 was detected in 156 (21.8%) of the samples. Sequencing of 32 full genomes indicated that the PCV3 strains in the US have undergone genomic changes, yet at a slow pace. The current PCV3 diversity level is 2.8%, with a minimal homology of 97.2%. Figure 2 shows the phylogenetic relationship among these strains. Of 125 diagnostic samples, 18 were positive for PCV2, reflecting a 14.4% prevalence.
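For context, amplification efficiencies such as those quoted above are conventionally derived from the slope of a standard-curve fit of Cq against log10 of the template amount, E = 10^(-1/slope) - 1. The sketch below illustrates that arithmetic with made-up Cq values; it does not use data from this study.

```python
# Sketch of the standard-curve arithmetic behind real-time PCR efficiency.
# The Cq values below are hypothetical illustrative numbers.
import numpy as np

log10_copies = np.array([6, 5, 4, 3, 2], dtype=float)   # 10-fold dilution series
cq = np.array([18.1, 21.5, 24.9, 28.3, 31.7])            # hypothetical Cq values

slope, intercept = np.polyfit(log10_copies, cq, 1)        # linear fit: Cq vs log10(copies)
efficiency = 10 ** (-1.0 / slope) - 1.0                   # E = 10^(-1/slope) - 1
r = np.corrcoef(log10_copies, cq)[0, 1]

print(f"slope={slope:.2f}, efficiency={efficiency * 100:.1f}%, R^2={r ** 2:.3f}")
# A slope near -3.32 corresponds to ~100% efficiency; the 90-110% guideline
# quoted above corresponds roughly to slopes between -3.6 and -3.1.
```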
Based on these results, the PRRS and PCV assays can be combined. A Luminex assay can detect many more targets in one reaction than PCR-based technology. One limitation, though, is that the assay is not able to quantify the viruses; only the presence or absence of a pathogen is reported. Also, as these pathogens evolve or new vaccines become available, more primer pairs can be added to update the system. Additionally, with adequate genetic information, detection of other pathogens, such as swine influenza virus, can be added to the system.
|
v3-fos-license
|
2016-05-12T22:15:10.714Z
|
2013-02-01T00:00:00.000
|
16665927
|
{
"extfieldsofstudy": [
"Medicine",
"Biology"
],
"oa_license": "CCBY",
"oa_status": "GREEN",
"oa_url": "https://europepmc.org/articles/pmc3755535?pdf=render",
"pdf_hash": "665df9195c148a5ef50916620234ed8c9e032591",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46510",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"sha1": "9e7936b83e8f68f648eb8e8ae51a7d6f04b4302b",
"year": 2013
}
|
pes2o/s2orc
|
Structure of rrn operons in pathogenic non-cultivable treponemes: sequence but not genomic position of intergenic spacers correlates with classification of Treponema pallidum and Treponema paraluiscuniculi strains
This study examined the sequences of the two rRNA (rrn) operons of pathogenic non-cultivable treponemes, comprising 11 strains of T. pallidum ssp. pallidum (TPA), five strains of T. pallidum ssp. pertenue (TPE), two strains of T. pallidum ssp. endemicum (TEN), a simian Fribourg-Blanc strain and a rabbit T. paraluiscuniculi (TPc) strain. PCR was used to determine the type of 16S–23S ribosomal intergenic spacers in the rrn operons from 30 clinical samples belonging to five different genotypes. When compared with the TPA strains, TPc Cuniculi A strain had a 17 bp deletion, and the TPE, TEN and Fribourg-Blanc isolates had a deletion of 33 bp. Other than these deletions, only 17 heterogeneous sites were found within the entire region (excluding the 16S–23S intergenic spacer region encoding tRNA-Ile or tRNA-Ala). The pattern of nucleotide changes in the rrn operons corresponded to the classification of treponemal strains, whilst two different rrn spacer patterns (Ile/Ala and Ala/Ile) appeared to be distributed randomly across species/subspecies classification, time and geographical source of the treponemal strains. It is suggested that the random distribution of tRNA genes is caused by reciprocal translocation between repetitive sequences mediated by a recBCD-like system.
INTRODUCTION

rRNA genes are co-localized in rRNA (rrn) operons. The typical bacterial rrn operon consists of 16S-23S-5S rRNA genes. In addition, rrn operons may contain tRNA genes and regulatory regions. The rrn operons are highly transcribed in bacteria (Condon et al., 1992), especially during the exponential phase of growth and in fast-growing bacteria. It is generally believed that bacteria with a short generation time have multiple rrn operons in the genome. Multiple copies of 16S and 23S rRNA genes in an organism are almost identical (Pei et al., 2009, 2010), suggesting homogenization of rRNA genes through homologous recombination (Liao, 2000). The 16S and 23S rRNA genes are widely used in bacterial phylogenetic studies, but the 5S rRNA genes are too short to be useful for this purpose.
In addition to the rRNA genes, the rrn operons contain intergenic spacer regions (ISRs). The ISRs are not involved in ribosomal function, so they are not under functional constraints, resulting in higher ISR microheterogeneity among bacterial species and strains (de Vries et al., 2006; Gürtler, 1999). The 16S-23S ISRs vary in length, tRNA composition and intragenomic nucleotide diversity (Stewart & Cavanaugh, 2007), and have been used for bacterial identification, molecular typing (Indra et al., 2010; Sadeghifard et al., 2006) and evolutionary studies (Antón et al., 1998).
In this study, we used the variation present in the rrn operons to assess evolutionary relationships among several pathogenic non-cultivable treponemes. The organisms studied comprised Treponema pallidum and Treponema paraluiscuniculi species and an unclassified simian isolate (Fribourg-Blanc). The species of T. pallidum comprised T. pallidum ssp. pallidum (TPA), T. pallidum ssp. pertenue (TPE) and T. pallidum ssp. endemicum (TEN), the aetiological agents of syphilis, yaws and endemic syphilis, respectively. T. paraluiscuniculi (TPc), the aetiological agent of rabbit syphilis, and the simian Fribourg-Blanc isolate are closely related to the T. pallidum spp. (Šmajs et al., 2011a).
Closely related spirochaetes in the genus Borrelia contain two distinct rrn operon patterns. Whereas Lyme disease agent (Borrelia burgdorferi sensu lato) harbours a unique operon composed of 16S-23S-5S-23S-5S rRNA genes, agents of relapsing fever carry an operon consisting of 16S-23S-5S rRNA genes (Fraser et al., 1997;Schwartz et al., 1992). Two typing systems have been developed using the 16S-23S ISR, which includes both the tRNA-Ala and tRNA-Ile genes (Bunikis et al., 2004;Liveris et al., 1996). The typing systems have been applied to differentiate species within B. burgdorferi sensu lato in North America (Bunikis et al., 2004), to study populations of tick-and bird-borne Borrelia garinii in Eurasia (Comstedt et al., 2009) and to study the association between the B. burgdorferi sensu stricto genotype and dissemination of infection (Hanincová et al., 2008;Wormser et al., 2008).
In this study, we compared the sequences of both rrn operons among pathogenic treponemes, comprising 11 strains of TPA, five strains of TPE, two strains of TEN, a simian Fribourg-Blanc isolate and a rabbit TPc strain. We also studied 16S-23S ISRs in 30 clinical samples positive for T. pallidum DNA.
Isolation of treponemal DNA. TPA Nichols and SS14, TPE Samoa D and CDC-2, and TPc Cuniculi A chromosomal DNA was prepared as described previously by Fraser et al. (1998) by extracting DNA from experimentally infected rabbits. Treponemes were purified by Hypaque gradient centrifugation (Baseman et al., 1974). Because a high input of DNA was required for the sequencing approach, wholegenome amplification (WGA) (REPLI-g Midi kit; Qiagen) was performed for TPA Nichols DNA according to the manufacturer's instructions. In addition, non-WGA DNAs from TPA Nichols and SS14, TPE Samoa D and CDC-2, and TPc Cuniculi A were used. The Philadelphia 1, Philadelphia 2, DAL-1, Mexico A, Bal 73-1, Grady, MN-3, Madras and Haiti B (TPA), CDC-1, CDC-2, Gauthier and Samoa F (TPE), Bosnia A and Iraq B (TEN), and Fribourg-Blanc (a simian T. pallidum) strains were obtained as rabbit testicular tissues containing treponemal cells. After brief centrifugation of the samples at 100 g for 5 min, the DNA enriched for bacterial cells was amplified using the REPLI-g Midi kit.
DNA sequencing. DNA sequencing of the XL-PCR products was carried out with a BigDye Terminator v3.1 Cycle Sequencing kit (Applied Biosystems) using a primer-walking approach. Additional internal oligonucleotide sequencing primers (see Table S1, available in JMM Online) were designed using Primer3 software (Rozen & Skaletsky, 2000). The LASERGENE program package (DNASTAR) was used to assemble the consensus sequences.
Phylogenetic analyses. In addition to the rrn operons investigated in the 20 strains (Table 1), the rrn operons of TPA Chicago (GenBank accession no. CP001752; Giacani et al., 2010) were included in the evolutionary analysis. Concatenated sequences of the rrn1 and rrn2 operons (Table S2) were used for the construction of evolutionary trees using the neighbour-joining method (Saitou & Nei, 1987) in MEGA4 software (Tamura et al., 2007). The bootstrap consensus trees were determined from 1000 bootstrap resamplings. Branches with <50% bootstrap support were collapsed.
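The tree building itself was done in MEGA4; purely as an illustration, a comparable neighbour-joining analysis with bootstrap consensus can be sketched in Biopython as below. The alignment file name is a placeholder, and the 'identity' distance model is a simplification of the substitution models available in MEGA.

```python
# Illustrative sketch of a neighbour-joining/bootstrap analysis in Biopython.
# Not the software used in the study; file name and distance model are assumptions.
from Bio import AlignIO, Phylo
from Bio.Phylo.TreeConstruction import DistanceCalculator, DistanceTreeConstructor
from Bio.Phylo.Consensus import bootstrap_consensus, majority_consensus

alignment = AlignIO.read("concatenated_rrn_operons.fasta", "fasta")  # placeholder file

calculator = DistanceCalculator("identity")                # simple identity distances
constructor = DistanceTreeConstructor(calculator, "nj")    # neighbour-joining

# Majority-rule consensus of trees built from bootstrap resamplings of the
# alignment columns (the paper used 1000 resamplings).
consensus_tree = bootstrap_consensus(alignment, 1000, constructor, majority_consensus)

Phylo.draw_ascii(consensus_tree)
Phylo.write(consensus_tree, "rrn_nj_consensus.nwk", "newick")
```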
Analysis of clinical specimens. Skin and mucosal swabs were placed in a tube containing 1.5 ml sterile water and agitated for 5 min at room temperature. The swab was withdrawn and the supernatant was used for DNA isolation. Swab supernatant (0.2-0.4 ml) and whole blood (0.2-0.8 ml) were used for DNA isolation using a QIAamp DNA Mini kit (Qiagen) according to the manufacturer's Blood and Body Fluid Spin Protocol. To detect the presence of treponemal DNA in swab and whole-blood samples, a diagnostic PCR assay amplifying five different Treponema-specific genes including polA (TP0105 locus), tmpC (TP0319), TP0136, TP0548 and the 23S rRNA gene was performed. Amplification and subsequent sequencing of TP0136, TP0548 and the 23S rRNA gene have been used, although not for diagnostic purposes, for molecular typing of treponemal strains (Flasarová et al., 2006, 2012; Liu et al., 2001; Matějková et al., 2009; Woznicová et al., 2007).
The composition of 16S-23S ISR sequences in the rrn1 and rrn2 operons, encoding either tRNA-Ile or tRNA-Ala, was determined by another nested PCR. In the first step, each clinical isolate was tested in four parallel reactions with the following primer pairs (Fig. 1 and Table S3): RNA1Fb and RNA1-tRNA-Ile (first reaction), RNA1Fb and RNA2-tRNA-Ala (second reaction), RNA2Fc and RNA1-tRNA-Ile (third reaction) and RNA2Fc and RNA2-tRNA-Ala (fourth reaction). Using these primer sets, the PCR products revealed the position (rrn1 or rrn2) and composition (tRNA-Ile or tRNA-Ala) of the amplified rrn operon. In the second step of the nested PCR, the PCR product of the rrn1 region (from the first and second reactions) was amplified using the TP0225-6aF and TP0225-6bR primers, whilst the PCR product of the rrn2 region (from the third and fourth reactions) was amplified with RNA2Fa and TP0225-6bR. The second step was not specific for the Ile/Ala or Ala/Ile rrn spacer pattern but improved the sensitivity of detection of the PCR product from the first step. Each PCR contained 0.4 µl 10 mM dNTP mix, 2 µl 10× ThermoPol Reaction buffer (New England BioLabs), 0.1 µl of each primer (100 pmol/µl), 0.1 µl Taq DNA polymerase (5000 U/ml; New England BioLabs), 1 µl test sample and 16.3 µl PCR-grade water, giving 20 µl in total. PCR amplification was performed using a GeneAmp 9800 thermocycler (Applied Biosystems) with the following cycling conditions: 94 °C for 5 min; 40 cycles of 94 °C for 60 s, 72 °C for 20 s and 72 °C for 150 s; and a final extension at 72 °C for 10 min. The second step of the nested PCR used the same conditions but a lower annealing temperature of 67 °C.
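The interpretation of the four first-step reactions reduces to a simple decision rule, sketched below for illustration (reaction 1: rrn1/tRNA-Ile, reaction 2: rrn1/tRNA-Ala, reaction 3: rrn2/tRNA-Ile, reaction 4: rrn2/tRNA-Ala). The function is a hypothetical paraphrase of the logic described above, not software used in the study.

```python
# Illustrative sketch of the spacer-pattern call from the four PCR outcomes.
# True means a product was obtained in that reaction.

def spacer_pattern(r1: bool, r2: bool, r3: bool, r4: bool) -> str:
    """Call the 16S-23S spacer pattern from the four first-step reactions."""
    if r1 and r4 and not (r2 or r3):
        return "Ile/Ala"          # tRNA-Ile in rrn1, tRNA-Ala in rrn2
    if r2 and r3 and not (r1 or r4):
        return "Ala/Ile"          # tRNA-Ala in rrn1, tRNA-Ile in rrn2
    return "indeterminate"        # mixed or negative results: repeat or sequence

print(spacer_pattern(True, False, False, True))   # -> "Ile/Ala"
```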
RESULTS
Amplification and sequencing of the rrn operons

Two rrn operons (16S-23S-5S) have been described in pathogenic Treponema genomes, with the 16S-23S ISR comprising genes encoding tRNA-Ala or tRNA-Ile (Fraser et al., 1998; Fukunaga et al., 1992; Giacani et al., 2010; Šmajs et al., 2011b). Using XL-PCR, we amplified the rrn operons in 20 treponemal strains (Tables 1 and S2) comprising 11 strains of TPA, five strains of TPE, an unclassified simian isolate, two strains of TEN and a rabbit TPc isolate. XL-PCR products were obtained for all 40 investigated regions. However, the assembled sequence of the rrn2 operon of Iraq B (TEN) was repeatedly ambiguous at several positions, probably due to low DNA quality, so
the Iraq B sequences were excluded from the construction of phylogenetic trees. The size of the sequence between the 16S and 23S rRNA genes (both excluded) varied according to the presence of the tRNA-Ile (117+74+111 bp, 302 bp in total) or tRNA-Ala (116+74+122 bp, 312 bp in total) gene.
Sequence analysis of rrn operons
In the individual TPA genomes, the amplified rrn1 and rrn2 regions were identical for 5141 bp (Tables 2 and S2, Fig. 1) including the DNA regions 212 bp upstream of the 16S rRNA, the 16S rRNA (1537 bp), 23S rRNA (2951 bp), 5S rRNA (110 bp) and 23S-5S ISR (50 bp), and a region of 54 bp downstream of the 5S rRNA. Additional identical sequences were located within the 16S-23S ISR downstream of the 16S rRNA (120 bp) and upstream of the 23S rRNA (118 bp) genes (Fig. 2, Table 2). Alternative sequences within the 16S-23S ISR, encoding tRNA-Ile or tRNA-Ala, comprised an additional 64 or 74 bp, respectively (Fig. 2). To extend the comparative analysis over all available data, the TPA Chicago sequences of the rrn operons (GenBank accession no. CP001752; Giacani et al., 2010) were added to the sequences of the 20 strains used in this study.
When compared with the TPA strains, a deletion of 33 bp was found in homologous regions of the rrn2 region in the TPE, TEN and simian strains (Fig. 1), whilst the TPc strain contained a 17 bp deletion at the same position (Fig. 1). These deletions resulted in shortening (33 bp deletion) or truncation (17 bp deletion) of TP0266 orthologues. Among all investigated strains, in addition to the observed deletions, we found only 17 heterogeneous sites within the entire region, excluding the 16S-23S ISR encoding tRNA-Ile or tRNA-Ala. Sixteen sites were single nucleotide changes and one was a single base-pair deletion ( Table 2). The rrn1 operon of the reference TPA Nichols genome (GenBank accession no. AE000520.1; Fraser et al., 1998) showed a deletion within the 16S rRNA gene (data not shown), whereas all other strains, including the Nichols strain examined in our study, did not. This deletion may represent a sequencing error present in the reference Nichols genome, as dozens of such sequencing errors have already been confirmed (Giacani et al., 2012;Matějková et al., 2008). In contrast, a 1 bp deletion in the TPA DAL-1 genome, upstream of the 16S rRNA gene in the rrn1 operon, was repeatedly confirmed by Sanger sequencing. The identified nucleotide change at position 2104 of the 23S rRNA gene (differentiating the SS14 strains from other investigated strains) corresponded to the mutation causing macrolide resistance in treponemal strains (Stamm & Bergen, 2000).
All TPA strains differed from the other pathogenic treponemes by a nucleotide change at position 766 of the 23S rRNA gene. The TPE strains and the simian isolate Fribourg-Blanc could be distinguished from the other pathogenic treponemes by a single-nucleotide polymorphism (SNP) localized 93 bp upstream of the 16S rRNA genes. The TPE strains could be differentiated from the simian isolate by a nucleotide sequence change in the 23S rRNA gene (nt 458). The TEN showed a nucleotide change in the 16S rRNA gene, and TPc showed 12 nt changes in the investigated rrn sequences (Table 2).
Reciprocal translocation of tRNA genes
In contrast to the phylogenetically conserved SNP distribution in the repetitive sequences of the rrn operons, the genes coding for tRNA did not show the same evolutionary pattern (Table 2, Fig. 1). In this study, we observed two 16S-23S ribosomal ISR patterns. The spacer pattern Ile/Ala included the tRNA-Ile gene within the rrn1 region and the tRNA-Ala gene within the rrn2 region. The Ile/Ala pattern was observed in the following strains: TPA Nichols, Bal 73-1, Grady, SS14, Chicago, DAL-1, Philadelphia 1 and Madras; TPE Samoa D, CDC-1 and Samoa F; and TPc Cuniculi A. The reverse ISR pattern Ala/Ile consisted of the tRNA-Ala gene within the rrn1 region and the tRNA-Ile gene within the rrn2 region. The Ala/Ile pattern was found in TPA Mexico A, MN-3, Philadelphia 2 and Haiti B strains; in TPE Gauthier and CDC-2; in the unclassified treponeme Fribourg-Blanc; and in TEN Iraq B and Bosnia A genomes.
The concatenated rrn operons, excluding the tRNA genes and their vicinity, clustered according to the species/subspecies classification (Fig. 3a). The TEN Iraq B strain was omitted from the analysis because we were unable to obtain an unambiguous rrn2 operon sequence. Nevertheless, the rrn1 operon was identical to another TEN strain, Bosnia A. In contrast, the trees showing concatenated rrn operons including tRNA genes (Fig. 3b) branched according to the composition of tRNA in the individual rrn operons, and then according to the species/ subspecies classification. This phenomenon can be explained by recombination events that have occurred between rrn operons.
To predict recombination hot-spot sites within the rrn operons, four methods from the RDP3 program were applied. All four methods predicted four recombination sites (Table 3), two sites in each rrn operon. The predicted sites corresponded to the same positions within the 16S (nt 783) and 23S (nt 324) rRNA genes in both rrn operons.
Structure of the rrn operons in clinical isolates containing T. pallidum DNA

The composition of the 16S-23S ribosomal ISR (Ile/Ala or Ala/Ile spacer pattern in the rrn operons) was tested in 30 recently isolated clinical samples (Flasarová et al., 2012). The results are summarized in Table 4. Only the Ile/Ala pattern was identified in all clinical samples tested, despite the fact that the clinical samples belonged to five different genotypes (Table 4), as revealed by CDC and sequencing-based typing (Flasarová et al., 2012; Pillay et al., 1998). Nevertheless, all clinical strain genotypes were similar to the SS14 strain genotype.
DISCUSSION
In this study, we examined the rrn operons in 20 pathogenic treponemal strains and 30 clinical isolates. All investigated strains contained two copies of the rrn operons. Two rrn operons with the same composition have also been described in other human and animal treponemes except for T. vincentii containing only one rrn operon (Fraser et al., 1998;Matějková et al., 2008;Seshadri et al., 2004;Stamm et al., 2002).
Our results confirmed that there is little diversity within rRNA genes and ISRs. However, our data showed that the rrn operon structure displayed blocks of conserved and polymorphic sites. The TPA DAL-1 strain showed a 1 bp deletion upstream of the 16S rRNA gene in the rrn1 operon. It is known that TPA DAL-1 grows more rapidly in rabbits than other pathogenic strains (Wendel et al., 1991), and it is possible that the different promoter DNA conformation may affect expression of the rrn1 operon. Gürtler & Stanisich (1996) […] (Lebuhn et al., 2006), including for treponemal (Centurion-Lara et al., 1996; Stamm et al., 2002) and borrelian samples (Bunikis et al., 2004; Comstedt et al., 2009). Centurion-Lara et al. (1996) examined the TPA Nichols and TPE Gauthier strains, and no difference was found. However, they did not examine the genomic positions of individual 16S-23S ISRs. Interestingly, the 16S-23S ISR typing of Borrelia burgdorferi sensu stricto is in accordance with ospC gene typing (Hanincová et al., 2008; Wormser et al., 2008). The ospC gene, encoding a protein involved in the initiation of infection in warm-blooded animals, is located on plasmid DNA, whilst the rrn operon is on chromosomal DNA. Moreover, different 16S-23S ISR genotypes are associated with different degrees of invasivity (Wormser et al., 2008).
Despite the low heterogeneity in the rrn operons, two different ISR patterns were observed in the pathogenic treponemal samples. Whereas detection of specific nucleotide changes may be of interest in identification of treponemal diseases, the detection of tRNA genes in the 16S-23S ribosomal ISR appears to be of limited use in typing of clinical samples. All clinical samples showed the Ile/Ala spacer pattern in rrn operons, so the tRNA-Ile and tRNA-Ala genes are not useful for molecular typing of clinical strains, at least for treponemes present in the population of the Czech Republic.
Due to the conserved machinery of protein synthesis, rRNA genes are expected to be under strong purifying selection and are exposed to the intragenomic homogenization process via gene conversion (Liao, 2000; Nei & Rooney, 2005). Several studies (Acinas et al., 2004; Pei et al., 2009, 2010) have shown that homogenization of multiple rRNA genes is common among bacteria. In addition, Harvey & Hill (1990) successfully constructed several Escherichia coli strains with recombined inverted rrn operons; however, the recombinants tended to recover the original configuration. The rrn operons of treponemal strains are direct repeats: the tRNA-Ala gene is replaced by tRNA-Ile (and vice versa), and the recombination is a common event with no correlation to the otherwise-determined phylogenetic relationship among tested treponemes. It has been postulated that recombination between direct repeats leads to the duplication or deletion of a repeat (Petes & Hill, 1988; Petit, 2005). Whereas tRNA-Ile (TP_t12) is a unique gene in sequenced treponemal genomes, there are three predicted tRNA-Ala genes (TP_t15, TP_t41 and TP_t45; Fraser et al., 1998). As both tRNA-Ile (TP_t12, GenBank accession no. AE000520.1) and tRNA-Ala (TP_t15, AE000520.1) genes need to be maintained in the genomes of pathogenic treponemes, reciprocal translocation, rather than gene conversion, appears to be the mechanism for the observed rrn heterogeneity among tested strains. Such a process would
require double cross-overs in both rrn operons, and therefore is much less common than insertion/deletion or gene-conversion events (Harvey & Hill, 1990;Hashimoto et al., 2003). Predicted recombination hot-spot sites were located in the 16S and 23S rRNA genes, genes with two identical copies within every strain examined in our study.
During replication of direct-repeat regions, DNA polymerase might undergo strand slippage, leading to collapse of the replication fork, and recombination enzymes are involved in the DNA repair machinery (Darling et al., 2008; Santoyo & Romero, 2005). Although only the recF recombination pathway was predicted in the TPA Nichols genome (Fraser et al., 1998), the recF pathway suggests the gene-conversion mechanism (Kobayashi, 1992; Takahashi et al., 1992). Therefore, the reciprocal recombination in pathogenic treponemes may be accompanied by crossing-over, a repair mechanism implemented by the recBCD pathway in E. coli (Kobayashi, 1992). Recently, recBCD orthologues (addA and addB), composed of TP0898 and fused TP0899-TP0900 orthologues, were predicted for several investigated treponemal genomes (Čejková et al., 2012; Giacani et al., 2012; Šmajs et al., 2011b). However, it would be extremely difficult to prove experimentally the recBCD-mediated crossing-over mechanism in T. pallidum.
In summary, two different rrn spacer patterns (Ile/Ala and Ala/Ile) seem to be distributed randomly across the time and place of original isolation of treponemal strains (e.g. Philadelphia 1 vs Philadelphia 2, CDC-1 vs CDC-2) and the laboratory that provided the treponemal material (Tables 1 and 2). This random distribution of tRNA genes is probably caused by reciprocal translocation between repetitive sequences mediated by a recBCD-like system.
|
v3-fos-license
|
2021-04-27T05:15:41.989Z
|
2021-04-01T00:00:00.000
|
233397237
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.mdpi.com/1996-1944/14/8/1960/pdf",
"pdf_hash": "469f6f2c20ba27b7e3779e9771c9299f7ad40eaa",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46512",
"s2fieldsofstudy": [
"Engineering",
"Materials Science",
"Medicine"
],
"sha1": "469f6f2c20ba27b7e3779e9771c9299f7ad40eaa",
"year": 2021
}
|
pes2o/s2orc
|
The State of Starch/Hydroxyapatite Composite Scaffold in Bone Tissue Engineering with Consideration for Dielectric Measurement as an Alternative Characterization Technique
Hydroxyapatite (HA) has been widely used as a scaffold in tissue engineering. HA withstands high mechanical stress and exhibits particularly excellent biocompatibility owing to its similarity to natural bone. Nonetheless, this ceramic scaffold has limited applications due to its apparent brittleness, which has presented some difficulties in shaping implants out of HA and in sustaining a high mechanical load. Fortunately, these drawbacks can be mitigated by combining HA with other biomaterials. Starch has been heavily considered for biomedical device applications owing to its low cost, wide availability, and biocompatibility properties that complement HA. This review provides an insight into starch/HA composites used in the fabrication of bone tissue scaffolds and the numerous factors that influence scaffold properties. Moreover, an alternative characterization of scaffolds via dielectric and free-space measurement, as a potential contactless and nondestructive measurement method, is also highlighted.
Introduction
Tissue engineering revolves around exploiting biological and engineering fundamentals to bring together cells and scaffold materials to assist tissue growth and recovery. It is favorably viewed as a feasible method to overcome transplantation issues arising from the inadequacy of donor tissues or organs [1]. The success of tissue engineering depends on how the technique addresses multiple challenges in the cell technology field, wherein the aspects that need to be highlighted include cell sourcing, cell function manipulation, and the effectiveness of stem cell technology. The challenges also encompass construction technology, which is closely associated with design, tissue engineering construction and delivery transports, as well as manufacturing technology that is customized to suit clinical needs and acceptance by the body in terms of immune acceptance [2]. It is opined here that natural biological implementation may manage some […]. There are several approaches to treating diseased or lost tissue in patients, such as in situ regeneration, whereby external stimuli or specific scaffolds induce tissue formation and stimulation of the body's own cells, leading to local tissue regeneration [8]. Another approach is the implantation of freshly isolated or cultured cells, carried out by direct injection of cells or small cellular aggregates, either from a donor [9] or the patient [10], onto the damaged or lost region without involving a degradable scaffold. Moreover, treatment could also be done through in vitro growth of three-dimensional (3D) tissue from autologous cells within a scaffold and then proceeding with the implantation procedure upon maturity [11]. In the latter category, utilization of autologous cells for bone reconstruction would entail an augmentation of the local host cells and transplantation of cells.
The augmentation procedure can be further branched into membrane techniques, biophysical stimuli, and biological stimuli. The membrane technique is based on the guided bone regeneration principle (GBR), in which the deployment of the resorbable membrane creates a barrier, separating the bone tissue from the ingrowth of soft tissue, thus creating an unrestrained space that permits the growth of a new bone. This type of reconstruction is generally used to rectify maxilla and mandible structure in maxillofacial surgery [12]. GBR strongly depends on the defect size and geometry, within which lies some of its limitations.
On the other hand, biophysical stimuli refer to inducement by mechanical and electrical sensations as bone formation regulators. Various clinical trials have demonstrated the efficacy of exposure to electromagnetic field (inductive coupling, capacitive coupling, and composite) and mechanical stimulation (distraction osteogenesis, low-intensity pulsed ultrasound, fracture activation) in hastening the bone healing process, leading to several clinically approved practices by relevant authorities [13].
Biological stimuli are attributed to signaling molecule cytokines involved in intracellular communication activity control and immunological reaction direction. Specific to bone construction applications, the cytokines in question can be further distinguished as a group of growth factors (GF), contributing to the effect that can be viewed in the context of the growth factor network. Chief among the growth factors is the superfamily of transforming growth factor-beta (TGF-β) with its three isoforms, namely TGF-β1, TGF-β2, and TGF-β3. These isoforms are crucial for bone tissue cell proliferation, differentiation, and remodeling processes. TGF-β is in consolidation with other proinflammatory cytokines, GFs, and angiogenic factors, i.e., fibroblast growth factors (FGF1 and FGF2), platelet-derived growth factor (PDGF), insulin-like growth factors (IGF-1 and IGF-2), bone morphogenetic proteins (BMP) family, and extracellular non-collagenous bone matrix proteins, namely osteonectin (OSN, SPARC), osteocalcin (BGLAP), and osteopontin (OPN, SPP1), all, of which are synthesized during distraction osteogenesis [14].
Scaffolds can be categorized by their composition, external geometry, macro- and microstructure, interconnectivity, surface-to-volume ratio, mechanical capability, degradation, and chemical properties. As noted above, scaffolds are templates for cells and permit ingrowth of the surrounding tissue after implantation. The scaffold architecture may influence cell parameters, such as cell viability, migration and differentiation, as well as the composition of the substituted tissue. Loads applied at the implantation site are borne by the bone tissue scaffold and transferred to the surrounding tissue, so the scaffold must be mechanically competent to absorb the load after implantation [15]. Ahn et al. evaluated the biomechanical properties of poly(para-phenylene) (PPP) bone implants using finite element modeling; under applied stress, the load was dissipated uniformly across the porous PPP, suggesting that the porous structure of PPP is capable of minimizing stress shielding. The enhanced biomechanical behavior is attributed mainly to mechanical interlocking at the interface between the bone and the porous implant [7]. Previously, nondestructive mechanical analyses were performed by computed microtomography (micro-CT) to evaluate the internal structure of the materials and the performance of the bone scaffolds [16]. Microstructural defects in scaffolds can also be examined closely through finite element mathematical modeling, as studied by Naghieh et al. [17], who numerically computed the effect of post-heating on the elastic modulus and compressive behavior of scaffold samples to assess their microstructural performance. Recognizing microstructural imperfections is crucial because they can alter the mechanical behavior of porous materials and their cellular lattice structures. Mathematical modeling of bone tissue scaffolds was also implemented by Avilov et al. [18], who calculated the stress-strain state of lower jaw prostheses while accounting for the geometry, the properties of the bone tissue, and the mastication activity of patients. Sufficient porosity is thus required to ensure bone and vascular ingrowth together with tolerable mechanical properties for load-bearing [16]. Hollister [19] underlined the significance of scaffold materials and porous structural designs in the region of 10 µm to 100 µm for providing temporary mechanical function, preserving tissue volume, and delivering the necessary biofactors (stem cells, genes and proteins) to stimulate tissue repair. To achieve these goals, the hierarchical porous structure of a scaffold must be tuned to the desired mechanical strength and mass transport. Porous structures should therefore allow cell migration while encouraging nutrient transport and cell attachment, and scaffolds must be mechanically strong enough to maintain their structural integrity during cell culture [20].
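The trade-off between porosity and load-bearing capacity discussed above can be illustrated with the classical Gibson-Ashby scaling relation for open-cell porous solids, in which the effective elastic modulus scales roughly with the square of the relative density. The short sketch below is purely illustrative: the prefactor of 1, the exponent of 2, and the dense-solid modulus chosen here are textbook assumptions rather than values reported in the studies cited above.

```python
# Illustrative Gibson-Ashby estimate of how porosity lowers scaffold stiffness.
# Assumptions (not taken from the cited studies): open-cell behavior, prefactor
# C = 1, exponent n = 2, and an arbitrary dense-solid modulus for illustration.

def gibson_ashby_modulus(porosity: float, e_solid_gpa: float = 10.0,
                         c: float = 1.0, n: float = 2.0) -> float:
    """Return an estimated effective elastic modulus (GPa) of a porous scaffold."""
    relative_density = 1.0 - porosity            # rho / rho_solid
    return c * e_solid_gpa * relative_density ** n

if __name__ == "__main__":
    for phi in (0.3, 0.5, 0.7, 0.9):
        print(f"porosity {phi:.0%}: E ~ {gibson_ashby_modulus(phi):.2f} GPa")
```

Even this crude scaling shows why a porosity high enough for cell migration and nutrient transport comes at a steep cost in stiffness, which is the compromise the hierarchical designs discussed above try to manage.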
The extracellular matrix (ECM) of bone tissue is composed of inorganic and organic phases. Hydroxyapatite (HA) is chemically and physically similar to the inorganic components of natural bone and has excellent biocompatibility, osteoconductivity, and bioactivity, which make it one of the best candidates for the inorganic phase of the ECM. Additionally, HA has a Ca/P ratio in the range of 1.50-1.67, which encourages bone regeneration [21]. HA by itself is brittle and difficult to shape, and a biopolymer is therefore usually added to enhance its strength, as shown in previous studies [3,22]. The typical biopolymer for this purpose is collagen, which is relatively poor in mechanical strength; this shortcoming can be improved in several ways, such as cross-linking, gamma irradiation, and carbodiimide addition [23]. A biocompatible material whose properties match or surpass those of collagen should therefore be considered for fabricating an excellent bone tissue scaffold, and the candidate biomaterial should preferably come from resources other than fossil or petroleum feedstocks [24].
Gomes et al. [25] demonstrated that starch-based scaffolds supported the attachment, proliferation, and differentiation of bone marrow stromal cells. Starch has been studied as a potential biomedical material because of its low cost, natural abundance, excellent hydration, and high biodegradability [26,27]. From a manufacturing point of view, starch is attractive because it can easily be formed by conventional polymer processing techniques, such as extrusion, molding, thermoforming, and blowing [28]. The constraints on adopting starch relate to processing issues, low mechanical strength, and sensitivity to water, and several attempts to overcome these problems have used additives and chemical modification. Previously, bone scaffolds were fabricated from a single material without combining it with other types of biomaterial; more recently, natural or synthetic polymers have been formulated with HA, one motivation being to improve porosity [29].
HA is the mineral form of calcium apatite, with the chemical formula Ca10(PO4)6(OH)2. It is the principal inorganic biomineral phase of human hard tissue, accounting for 60-70 wt.% of teeth and bone [30]. Its crystal structure is hexagonal, and each unit cell is made up of 44 atoms (10 Ca2+, 6 PO4^3-, and 2 OH-) arranged around the tetrahedral phosphate (PO4^3-) groups that constitute the skeleton of the unit cell [31]. The HA crystal system belongs to the hexagonal space group P63/m, which has a six-fold c-axis perpendicular to three equivalent a-axes at angles of 120° to each other. The lattice parameters of the HA unit cell are a = b = 0.9422 nm and c = 0.688 nm [32]. Popular methods of HA synthesis include wet chemical precipitation, the sol-gel method, the hydrothermal method, and microwave irradiation [33][34][35].
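As a consistency check on the crystallographic data quoted above, the unit cell volume and the theoretical density of HA can be computed directly from the hexagonal cell-volume formula V = (sqrt(3)/2)·a²·c and the molar mass of Ca10(PO4)6(OH)2, taking one formula unit per cell. The short script below is an illustrative calculation using standard atomic weights, not a value taken from the cited references.

```python
import math

# Hexagonal lattice parameters of hydroxyapatite quoted in the text (Z = 1)
a_nm, c_nm = 0.9422, 0.688

# Hexagonal unit-cell volume: V = (sqrt(3) / 2) * a^2 * c
volume_nm3 = (math.sqrt(3) / 2.0) * a_nm ** 2 * c_nm              # about 0.53 nm^3

# Molar mass of Ca10(PO4)6(OH)2 from standard atomic weights (g/mol)
molar_mass = 10 * 40.078 + 6 * 30.974 + 26 * 15.999 + 2 * 1.008   # about 1004.6

avogadro = 6.022e23
volume_cm3 = volume_nm3 * 1e-21
density_g_cm3 = molar_mass / (avogadro * volume_cm3)

print(f"Unit-cell volume    ~ {volume_nm3:.3f} nm^3")
print(f"Theoretical density ~ {density_g_cm3:.2f} g/cm^3")
```

The resulting density of roughly 3.1-3.2 g/cm^3 is close to the commonly quoted theoretical density of stoichiometric HA, which lends confidence to the lattice parameters given above.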
HA is a well-established bioactive material for biomedical applications in orthopedics and dentistry owing to its meritorious properties, such as excellent biocompatibility, bioactivity, and osteoconductivity [36], and it has been used as a coating material for metallic biomaterials over the past decades [37]. Swain et al. [38] studied HA-based scaffolds and showed that they exhibited good bioactivity and bioresorbability during in vitro assessment. In vivo and in vitro studies of HA implants indicate that synthetic HA can promote new cell differentiation and proliferation without causing local or systemic toxicity or inflammatory responses [31]. Despite this, scaffold constructs that combine a biopolymer such as starch with HA ceramic are necessary to overcome the inherent characteristics of HA, whose hard but brittle nature severely limits its use in load-bearing applications and its shaping into complex, defect-specific geometries [39].
Starch is the primary form of carbohydrate in plants. It can be sourced relatively cheaply owing to its availability from diverse resources, such as roots (cassava, potatoes), crop seeds (rice, wheat, corn, peas), and plant stalks (sago) [40]. Starch content varies between sources such as grains (≈30-80%), legumes (≈25-50%), and tubers (≈60-90%) [41]. Starch consists of two polymers of D-glucose: linear amylose, an essentially unbranched α[1 → 4]-linked glucan (20-30%), and the much larger amylopectin (60-90%), in which chains of α[1 → 4]-linked glucose are arranged in a highly branched structure with α[1 → 6] branching links [42]. Native starch exists in the form of semi-crystalline granules with a complex hierarchical structure. Together, amylose and amylopectin make up 98-99% of the dry weight of these granules, while the remaining fraction comprises lipids, minerals, and phosphorus in the form of phosphates esterified to glucose hydroxyls. Starch granules differ in shape (polygonal, spherical, lenticular) and size (1-100 µm in diameter); these traits depend on the content, structure, and organization of the amylose and amylopectin molecules, the branching architecture of amylopectin, and the degree of crystallinity [43]. Native starch extracted from plants cannot tolerate extreme processing conditions, such as high temperature, freeze-thaw cycles, strong acid and alkali treatment, and high shear rates [42,44]. Nevertheless, processes such as plasticization of starch [45] and compositing it with other materials, e.g., halloysite nanotubes (HNTs), further reinforce its mechanical, thermal, and swelling properties, resulting in a porous matrix with promising potential for biomedical applications [46].
Starch/Hydroxyapatite Composite Scaffold
Previous work on tissue engineering has shown that nano-HA can improve the function of a scaffold by providing a much larger surface area [47]. Still, the performance of HA-based ceramic scaffolds in treating bone defects is limited by their brittleness. Another problem associated with HA is that its degradation rate is difficult to control [48], which complicates the assessment of scaffold suitability for implantation. As one of the most abundant natural biopolymers, starch has been considered as a component of scaffold composites in tissue engineering because of its biodegradability and biocompatibility. Cytotoxicity analysis performed on starch/HA scaffolds showed that the scaffolds did not induce toxicity in mammalian cells [49]. The incorporation of starch can reduce the brittle nature of HA scaffolds. This is due to the helical structure of amylose in starch, which forms an open network when stretched. The network comprises a hydrophilic exterior surface and a hydrophobic interior cavity that interacts with HA nanoparticles; the resulting adhesive forces between the polymeric network and the HA nanoparticles improve the strength of the HA scaffolds via interlocking mechanisms [39,50].
In a recent study by Beh et al. [51], a scaffold made of a corn starch and nano-hydroxyapatite (n-HA) composite exhibited a network of macropores (200-600 µm) and micropores (50-100 µm) with a high degree of interconnectivity, suggesting that highly porous corn starch/HA endowed with good mechanical properties can be a potential biomaterial for bone tissue engineering applications. The combination of starch and HA can influence the mechanical properties of scaffolds through pore size manipulation. The scaffold must therefore be designed to meet specific porosity requirements (the size, interconnectivity, and distribution of the pores) to facilitate cell attachment and migration, while retaining sufficient mechanical strength to support newly generated tissue. Table 1 lists a number of significant studies pertaining to starch/HA composite bone scaffolds, while Table 2 indicates the pore sizes required to support the regeneration of bone tissue [52]: pores of 100-1000 µm support cell growth and collateral bone growth, pores larger than 1000 µm are regarded as essential for maintenance and programming, and smaller pores support cell proliferation and migration.

Several factors affect the properties of the fabricated scaffold, including the processing method, the botanical origin of the biopolymer, the composition of the biocomposite, and the sintering temperature. Based on these factors, the fabrication of a scaffold can be optimized to meet the desired porosity and strength. Studies by Gomes et al. [60] and Tiwari et al. [61] focused on the effects of different processing techniques on the structural properties of a scaffold. The techniques investigated included extrusion using blowing agents, compression molding, solvent casting and evaporation, in situ polymerization, and particulate leaching (an example procedure is shown in Figure 2). Although the morphology and mechanical properties of the scaffold were tailored via the different processing techniques, the biocompatible behavior of the starch-based scaffold was not affected. The scaffolds fabricated by Gomes et al. [60] via extrusion with a carboxylic-acid-based blowing agent showed pore sizes of 50-300 µm and porosities of 40-50%; an improvement in pore interconnectivity and pore sizes in the range of 100-500 µm was achieved when a citric-acid-based blowing agent was used. Scaffolds with pore sizes of 10-500 µm and a porosity of 50% were also reported when fabricated via compression molding and particle leaching, a technique in which the porosity is controlled by modifying the amount and size of the particles used. The authors' SEM images showed that solvent casting with particle leaching ultimately gave the best pore interconnectivity of the techniques mentioned, with pore sizes of 50-300 µm and a porosity of 60%. This processing technique also allows accurate control of the desired porous structure by adjusting the amount, shape, and size of the particles. Larger scaffold porosity provides more space for new cell growth, which is desirable.
Besides conventional melt-based processing techniques, advanced processing technologies such as rapid prototyping [52] can also produce scaffolds with accurate control of scaffold properties at the macro and micro scales, using computer-aided design (CAD) modeling tools and 3D printing of the scaffold. Sears et al. [62] aimed to develop printing tools and suitable bio-ink materials that could fulfill the requirements of a biocompatible scaffold. Sobral et al. [63] demonstrated that a pore size gradient in scaffolds fabricated via rapid prototyping could increase the seeding efficiency from approximately 35% to 70%. Electrospinning is another advanced technology for scaffold fabrication, particularly for designs that involve nanofibers; enhanced cellular activity was achieved with this technique, attributed to the enlarged surface area of the scaffold [64]. Electrospun nanofiber scaffolds based on HA and native cellulose exhibited pore sizes in the range of 50 to 500 nm, and the addition of nano-HA increased the average fiber diameter [65]. Overall, these advanced technologies have been proven to impart better control over scaffold morphology and thus over the functionalities associated with it.
Besides the fabrication technique, the amount of biopolymer added during fabrication also affects the scaffold properties. By varying the amount of potato starch used as the biopolymer in the scaffold formulation, Ahmed et al. [66] reported SEM evidence of an increase in porosity from 28% to 53% as the starch content was increased from 10 vol% to 30 vol% (percentage of starch in the composite mixture). Increasing the starch content also changed the pore shape from roughly spherical (at low starch content) to irregular (at high starch content). The compressive strength increased with starch addition up to 30 vol% but decreased thereafter. The initial increase in compressive strength with added starch resulted from the binding effect among starch granules [67]; beyond 30 vol%, the additional voids created by the higher porosity weakened the porous structure.
In addition to starch concentration, the work conducted by Ahmed et al. [67] also revealed the effect of heat treatment on starch-loaded HA scaffolds. HA treated at 1100 °C was compared with an as-received HA sample; with the heat-treated HA, the solid loading achievable using native corn starch reached up to 59 vol.%, compared with only 14 vol.% for the non-heat-treated HA. Beyond the solid-loading limit, the produced slurry took on a paste-like consistency. Achieving a higher solid-loading limit allows the advantages of higher porosity and mechanical strength from increased starch content to be explored. For instance, mechanical analysis of scaffold stiffness has been performed to examine structural integrity after 14 weeks of implantation in a nude mouse model [68,69]. The most common mechanical analysis performed on bone scaffolds is the compressive test [70,71]. Beh et al. [51] showed that the compressive strength of porous 3D HA samples increases in proportion to the corn starch content.
Sintering temperature may also affect the properties of scaffolds made from calcined HA and potato starch, as observed by Ahmed et al. [57]. Increasing the sintering temperature resulted in a decrease in porosity: for instance, at a starch content of 30 vol%, porosities of about 57%, 53%, and 50% were obtained at sintering temperatures of 1250 °C, 1300 °C, and 1350 °C, respectively.
Starches from several different botanical origins have been used in scaffold fabrication with NaCl as the porogen [72][73][74]. In these studies, scaffolds with high porosity and high water uptake could be achieved by increasing the starch concentration up to a certain level. The botanical origins of the starches used were "Balik Wangi", a variety of fragrant rice; "Ubi Gadong", the Indian three-leaf yam (Dioscorea hispida); and brown rice. Scaffolds were fabricated using the solvent casting and particulate leaching technique, and the effects of varying the amount of starch were investigated. The results agreed with earlier work, indicating an increase in porosity as the starch content increased.
Although the experimental setups were generally similar in the work of Mohd-Riza et al. [72], Hori et al. [73], and Mohd-Nasir et al. [74], the different botanical origins of the starch resulted in different pore sizes, as revealed by their respective SEM images. The scaffolds fabricated using starch from "Balik Wangi" rice, "Ubi Gadong", and brown rice had pore sizes in the ranges of 10-400 µm, 80-600 µm, and 138-1010 µm, respectively. Although compressive strength data were not available in these studies, previous literature suggests that different pore size ranges would result in different compressive strengths. A correct selection of the botanical origin of the starch therefore has the potential to tailor the properties of the scaffold for the intended application. It can be inferred that, in a scaffold with a fixed amount of HA, the amount of starch added plays an important role in the performance of the scaffold. Conversely, manipulating the HA content also significantly alters the mechanical properties and porosity of bone scaffolds [22,75]. Chen et al. [76] suggested that diversity in grain size affects, in particular, the chemical composition and macroporous structure of the biocomposite scaffold. Within the bone scaffold itself, grain size affects protein adsorption, as a larger grain size provides additional protein sites that promote cell adhesion and proliferation [77,78].
The amylose content of starch varies with botanical origin. It has been reported that starch with a high amylose content can improve properties such as tensile strength, elongation, and impact strength [79]. Koski et al. [54] studied the effect of amylose content on the mechanical properties of starch/HA bone scaffolds, comparing the total amylose contents of corn, potato, and cassava starch; compressive strength increased as the amylose content increased, since the amylose content affects physicochemical and functional properties of the scaffolds such as swelling capability and solubility. The amylose content of starch from banana, corn, and potato has been reported to be between 17% and 24%, while rice starch has an amylose content between 15% and 35% [80]. The amylose content of sago reported by Misman et al. [81] was approximately 27%, which makes sago a promising material for the fabrication of HA-starch-based scaffolds. Previous studies on scaffolds based on sago starch and hydroxyapatite are limited. Mustaffa et al. [82] used sago and polyvinyl alcohol as binders in the fabrication of an HA and alumina composite, focusing on the effect of sintering temperature on the strength of the scaffold; sago was not treated as one of the main components of the scaffold. Given the high amylose content of sago starch compared with starches of other botanical origins, it is worth exploring its potential to produce scaffolds with desirable properties. Unfortunately, the brittle nature of starch alone limits its application, and further adjustments, such as modification and blending with other polymers, are needed to overcome this issue.
Starch as Particulate Pore Former
Porous ceramics have been widely applied in filtration membranes [83] and catalyst supports [84], apart from their application as bone tissue scaffolds for bone ingrowth and drug delivery systems. Porosity and pore interconnectivity are important criteria in bone tissue scaffolds because interconnected pores enhance the nutrient supply, which supports cell survival in the deeper regions of the scaffold. This is directly affected by the macropore size, ratio, and morphology of the scaffold. Macropores of about 100 µm in diameter can accommodate the cellular and extracellular components of bone tissue and blood vessels, while pores larger than 200 µm in diameter facilitate osteoconduction [52]. Moreover, material porosity improves contact between the host tissue and ceramic implants, which promotes a better interface and reduces movement of the implant [85].
Furnishing macro-porosity in ceramic bodies requires the mixing of porogens or pore-forming agents into the material during manufacturing. These agents are subsequently removed by heating or dissolution, leaving free spaces, or pores, in the ceramic body [86]. Many porous ceramic applications crucially require precise control of porosity, pore size, pore shape, and pore-space topology. Biological pore-forming agents can be ecologically advantageous and biocompatible. Several starch types have been used, with granule sizes ranging from 5 µm (rice starch) to 50 µm (potato starch); burning out these starches at around 500 °C creates porosity in the ceramic body [87]. The use of starch in porous ceramics is also driven by its gelling ability, mainly as a binder, when immersed in water at 60 °C to 80 °C [88]. Xu et al. [89] employed the corn starch consolidation method in aluminum titanate (Al2TiO5)-mullite (M) ceramics to generate pores in the ceramic. They observed that the pore size increased with the corn starch percentage, the pores forming mainly through the volatilization of the corn starch. Pore sizes in the range of 10 µm to 15 µm were obtained, and a 10% addition of corn starch achieved an apparent porosity of 54.7% and a flexural strength of 11.5 MPa.
In another work, starch was introduced into yttrium oxyorthosilicate (Y2SiO5) ceramic to create porosity. Increasing the starch addition from 10 wt.% to 40 wt.% had a marked effect on the ceramic porosity, which lay between 38.3% and 70.4%, with corresponding compressive strengths between 28.25 MPa and 1.43 MPa [90]. In other ceramic applications, Si-O-C ("black glass") was prepared by foaming polysiloxane with starch; the starch addition improved the porosity of the ceramic, with porosities of 70% to 90% and compressive strengths of 2 MPa to 16 MPa [91].
Similarly, starch was employed as a pore generator in the scaffolds studied by Hadisi et al. [59]. In principle, the formation of imine (Schiff base) linkages between aldehyde groups from starch and amino groups of chitosan creates porosity in the scaffolds: imine formation displaces water molecules, which can increase the porosity and pore size during freeze-drying. Calcium phosphate granules were employed as osseous fillers and drug carriers by Marques et al. [92], who prepared HA and β-tricalcium phosphate (TCP) doped with strontium and magnesium via the precipitation method. When starch was employed as the pore-forming agent, Ozturk et al. [93] found that the pores were interconnected and nearly perfectly spherical.
Determining scaffold porosity by conventional methods, such as liquid displacement and volume change [93], is a destructive approach. A nondestructive alternative is mainly needed for hydrophilic-material-based scaffolds, for which porosity characterization can be performed using microwave measurement. Characterization of features such as pore size via SEM is comparatively costly and does not allow real-time monitoring of the porous scaffold after implantation; moreover, the interconnectivity and overall porosity of the scaffold are almost impossible to determine in this way [94]. For this reason, Ahn et al. [7] proposed micro-CT analysis to measure the porous structure of polyetheretherketone orthopedic implants.
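For reference, the conventional liquid displacement approach mentioned above usually reduces to simple volume bookkeeping. A commonly used formulation estimates open porosity from three recorded volumes: the initial liquid volume V1, the total volume after the scaffold has been immersed and impregnated V2, and the residual liquid volume after the soaked scaffold is removed V3. The sketch below assumes this three-volume protocol; the function name and the ethanol readings in the example are hypothetical and not taken from the cited works.

```python
def liquid_displacement_porosity(v1_ml: float, v2_ml: float, v3_ml: float) -> float:
    """Estimate open porosity of a scaffold by liquid displacement.

    v1_ml: initial volume of liquid (e.g., ethanol) in the cylinder
    v2_ml: total volume after the scaffold is immersed and impregnated
    v3_ml: residual liquid volume after the liquid-filled scaffold is removed

    Pore volume  = v1 - v3   (liquid retained inside the scaffold)
    Total volume = v2 - v3   (scaffold skeleton plus pores)
    Porosity     = (v1 - v3) / (v2 - v3)
    """
    pore_volume = v1_ml - v3_ml
    total_volume = v2_ml - v3_ml
    return pore_volume / total_volume

# Hypothetical readings: 5.00 mL initially, 5.60 mL with the scaffold immersed,
# 4.70 mL left after removing the soaked scaffold -> porosity of about 33%.
print(f"Porosity ~ {liquid_displacement_porosity(5.00, 5.60, 4.70):.0%}")
```

The need to force liquid into the pores is exactly what makes this method destructive for hydrophilic, swelling scaffolds, which motivates the microwave-based alternative discussed below.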
For the past few years, dielectric spectroscopy has been applied to human and animal tissue in vitro through the determination of dielectric properties. For instance, the effect of collagen cross-linking on dielectric properties was studied by Marzec et al. [95]; the measured dielectric values were affected by changes in the collagen structure, mainly due to the release of water molecules. Microwaves are electromagnetic waves of very short wavelength and are exceptionally sensitive to the dielectric properties of materials. Microwave materials are used extensively in telecommunications, microwave electronics, radar, industrial microwave heating, and aerospace, and it is important to characterize their absorption, transmission, reflection, dielectric properties, and magnetic properties as a function of frequency. The dielectric permittivity of a material relative to free space is generally a complex quantity: the real part indicates the ability of the material to store microwave energy, while the imaginary part indicates its ability to absorb microwave energy [96].
Several techniques are available to determine dielectric properties using microwave measurement; the choice depends on factors such as the frequency of interest, the desired accuracy, the material form (liquid or solid), and whether the sample can be tested in direct contact [1]. Techniques for dielectric measurement in the microwave range include the coaxial probe, transmission line, free space, and resonant cavity methods. The coaxial probe is suitable for measuring materials in liquid or semi-solid form and requires the probe to be in contact with the material. Measurements using transmission lines, resonant cavities, and parallel plates impose restrictions on the sample size and shape. The free-space method is the only non-contacting method among those mentioned; it therefore reduces the possibility of damaging the sample and leads to a more accurate dielectric measurement [2]. Figure 3 shows the free-space measurement technique, in which a vector network analyzer is used to extract the dielectric properties of the material. In earlier work, dielectric measurement was often performed using the parallel plate method, which is suitable only for low frequencies; although a contactless arrangement can be achieved via some modification of the parallel plate method, it is not suitable for measurements in the microwave range.
The Effect of Porosity in Ceramics on Microwave Dielectric Measurement
Dielectric measurements have been widely applied to ceramic materials, with a focus on dielectric constants and losses. Dielectric loss is usually characterized by the ratio of the imaginary part to the real part of the permittivity and is denoted tan δ. Losses are classified into two types, intrinsic and extrinsic. Intrinsic losses depend on the crystal structure and are described as an interaction between the phonon system and the electric field. Extrinsic losses are related to imperfections in the crystal structure, including impurities, microstructural defects, grain boundaries, porosity, microcracks, and random crystallite orientation [97]. Lanagan et al. [98] examined the effects of porosity and microstructure on the dielectric properties of rutile titanium dioxide (TiO2). Dielectric measurements were made of the relative permittivity (εr), loss tangent (tan δ), and temperature coefficient of resonant frequency (TCF); with respect to porosity, tan δ was strongly influenced by pore volume, whereas εr was less sensitive to porosity.
Zhao et al. [99] approached the dielectric measurement of boron nitride/silicon nitride (BN/Si3N4) ceramics by adding Y2O3-MgO2 additive powder to manipulate the porosity of the ceramic. Introducing the additive powder increased the relative density of the BN/Si3N4 ceramic while the apparent porosity decreased. Porosity and phase composition strongly influenced the dielectric properties of the ceramic, and the effects were described by Lichtenecker's mixed logarithmic law. In essence, increasing the amount of Y2O3-MgO2 additive powder reduces the porosity and consequently raises the dielectric constant (ε) and the dielectric loss tangent (tan δ).
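Lichtenecker's logarithmic mixing rule, invoked above, treats the effective permittivity of a two-phase mixture as a volume-weighted geometric mean of the phase permittivities, so the influence of porosity can be estimated by treating the pores as an air phase with a permittivity of about 1. The sketch below is illustrative only: the complex permittivity assumed for the dense ceramic is a placeholder, not a value from the studies cited here.

```python
import cmath

def lichtenecker_effective(eps_solid: complex, porosity: float) -> complex:
    """Effective complex permittivity of a porous two-phase ceramic using
    Lichtenecker's logarithmic mixing rule:
        ln(eps_eff) = (1 - p) * ln(eps_solid) + p * ln(eps_air),
    where p is the pore (air) volume fraction and eps_air is taken as 1."""
    eps_air = 1.0 + 0.0j
    ln_eff = (1.0 - porosity) * cmath.log(eps_solid) + porosity * cmath.log(eps_air)
    return cmath.exp(ln_eff)

# Placeholder complex permittivity for the dense ceramic (eps' = 7.0, eps'' = 0.05),
# written with the usual convention eps* = eps' - j*eps''.
eps_dense = 7.0 - 0.05j

for p in (0.0, 0.2, 0.4, 0.6):
    eps = lichtenecker_effective(eps_dense, p)
    tan_delta = -eps.imag / eps.real       # loss tangent = eps'' / eps'
    print(f"porosity {p:.0%}: eps' ~ {eps.real:.2f}, tan delta ~ {tan_delta:.4f}")
```

Consistent with the trend reported above, both the dielectric constant and the loss tangent predicted by this rule fall as the porosity rises, which is what makes the dielectric response a candidate probe of scaffold porosity.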
Porosity therefore tends to decrease both ε and tan δ: the ε and tan δ of the BN/Si3N4 ceramic increased as the porosity decreased [99]. Dielectric measurement based on reflection and transmission was similarly applied to porous yttrium silicate (Y2SiO5) ceramics by Zhang et al. [90]. In their experiments, the ceramics were fabricated by freeze casting, and increasing the solid content from 15 vol.% to 30 vol.% decreased both the porosity and the pore channel size.
Few studies have reported dielectric measurements of scaffolds [3,6,11,12]. Lang et al. [12] investigated the properties of chitosan/nano-hydroxyapatite composites using dielectric constant measurements in the frequency range of 40 MHz to 110 MHz and found that the dielectric constant increased with increasing nanoparticle concentration. However, most of these works employed contact-based measuring methods, such as resonant cavities, waveguide transmission lines, dielectric probes, and the parallel plate method.
Dielectric property studies have also been extended to various starches, including tapioca, corn, wheat, rice, waxy maize, and Basmati rice [100,101]. The quantification of the dielectric properties of starch/HA scaffolds for bone tissue engineering via free-space measurement is therefore a new area to explore. The measurement of bone tissue scaffold porosity based on dielectric spectroscopy is still at an early stage; this research direction is currently being pursued by Razali et al. [102], Beh et al. [103], Roslan et al. [55] and Mohd Nasir et al. [104], concentrating on starch/HA scaffolds. Their research focuses on the correlation between the dielectric properties of bone scaffolds and their porosity, whereas other researchers have delved more into the dielectric properties of the materials themselves. This nondestructive alternative offers a new approach to measuring scaffold porosity compared with conventional methods such as liquid displacement. Applying dielectric measurement to determine the porosity of bone scaffolds is particularly sensible for hydrophilic-material-based scaffolds, because porosity measurement by liquid displacement using solvents such as distilled water causes such scaffolds to swell and rupture, making the measurement difficult.
Studies by Razali et al. [22], Beh et al. [103], Roslan et al. [72] and Mohd Nasir et al. [74] involved the measurement of the dielectric constant (ε′) and dielectric loss (ε″) of starch/HA bone tissue scaffolds using the transmission line method at frequencies ranging from 12.4 GHz to 18 GHz [55,104]. Dielectric spectroscopy is applicable to any porous composite scaffold, as the porous architecture can be quantified through the respective ε′ and ε″ values. The corn starch/HA scaffolds exhibited low ε′ and negative ε″, which were influenced by the porous morphology of the composites and their crystalline features arising from the various proportions of HA and corn starch applied [50,102,103]. A similar trend was seen in tapioca starch/HA scaffolds [104]. However, not all starch/HA composites exhibit dielectric properties proportional to the amount of starch added to the HA, as might be expected: Roslan et al. [55] found that the size and distribution of micropores in the scaffolds did not correspond to the increment of Bario rice starch added to the HA composites. This observation confirms the relation between the physicochemical and dielectric properties of the porous composite and may form the basis of a nondestructive microwave evaluation test for porous composites.
Conclusions
The factors that improve the properties of a scaffold, particularly its structure, include using a larger amount of starch, sintering at a lower temperature, and using heat-treated hydroxyapatite. The use of starch with a high amylose content could be the key to higher quality scaffolds produced from HA-starch composites. Additionally, the porosity and pore sizes of a scaffold are to date usually characterized using costly, contact-based, and destructive methods. An alternative is to characterize the scaffold via microwave measurement of its dielectric properties. Furthermore, the correlation between dielectric properties and structural properties could serve as initial groundwork for future biomaterial-based scaffold characterization, perhaps extended to include mechanical properties and biocompatibility.
|
v3-fos-license
|
2014-10-01T00:00:00.000Z
|
2007-06-15T00:00:00.000
|
32566478
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": null,
"oa_url": null,
"pdf_hash": "432963076caeb98cd204a0bd49a1272c100c63cf",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46513",
"s2fieldsofstudy": [
"Business",
"Medicine"
],
"sha1": "432963076caeb98cd204a0bd49a1272c100c63cf",
"year": 2007
}
|
pes2o/s2orc
|
Community-based Participatory Research: Necessary Next Steps
Community-based participatory research (CBPR) is gaining increasing credence among public health researchers and practitioners. However, there is no standardization in assessing the quality of research methods, the effectiveness of the interventions, and the reporting requirements in the literature. The absence of standardization precludes meaningful comparisons of CBPR studies. Several authors have proposed a broad set of competencies required for CBPR research for both individuals and organizations, but the discussion remains fragmented. The Prevention Research Centers (PRC) Program recently began a qualitative assessment of its national efforts, including an evaluation of how PRCs implement CBPR studies. Topics of interest include types of community partnerships; community capacity for research, evaluation, and training; and factors that help and hinder partner relationships. The assessment will likely contribute to the development of a standard set of competencies and resources required for effective CBPR.
Introduction
Community-based participatory research (CBPR) has captured the interest of public health researchers and communities alike, because it promises to generate health-enhancing programs well positioned for ready adoption by communities. The seminal work of Kurt Lewin (1) and Paulo Freire (2), to name just two early researchers, dates back to the 1930s and emphasizes an iterative process of action, reflection, and experiential learning. This process is essentially the foundation of CBPR as it is practiced today. Ten years ago, the Institute of Medicine (IOM) recommended CBPR as one of eight new areas in public health education (3). Despite that recommendation, it is unclear how widespread CBPR implementation is within schools of public health. In addition, the CBPR field lacks accepted research designs and outcome measures to determine the effectiveness of the approach. Few established guidelines enumerate the core competencies for organizations and individuals to successfully conduct CBPR.
The Prevention Research Centers (PRC) Program is a large extramural research initiative at the Centers for Disease Control and Prevention. Congress authorized the PRC Program in 1984 to conduct applied public health research, and the first three PRCs were funded in 1986. Currently, 33 PRCs are located in schools of public health or schools of medicine with an accredited preventive medicine residency program. This network of academic research centers collaborates with public health agencies and community members to conduct applied research in disease prevention and control, generally in underserved communities.
In 1997, the IOM conducted a review of the PRC Program and identified areas of strength and areas needing improvement (4). One area for improvement reflected the emerging recognition that the community is an important factor in the health of individuals. The IOM review indicated that "PRCs could serve as leaders in building partnerships, if they are able to progress to a second phase that involves research and dissemination projects that are jointly planned and produced with community partners who have joint ownership of the programs" (4). While many PRCs partnered with their communities before the 1997 IOM report, it was then that the PRC Program formally integrated CBPR into its prevention research framework.
Defining CBPR
Among the terms used to describe CBPR and its analogues are community action research, participatory action research, community-based action research, participatory rapid appraisal, and empowerment evaluation. Minkler (5) described CBPR as "a process that involves community members or recipients of interventions in all phases of the research process." Green and Mercer (6) defined CBPR as "a systematic inquiry, with the collaboration of those affected by the issue being studied, for purposes of education and taking action or effecting change." Sometimes the term is applied to community-based participatory efforts to implement health enhancement programs that do not include research components at all (7).
The W.K. Kellogg Foundation Community Health Scholars Program defines CBPR as follows: [CBPR] is a collaborative approach to research that equitably involves all partners in the research process and recognizes the unique strengths that each brings. CBPR begins with a research topic of importance to the community and has the aim of combining knowledge with action and achieving social change to improve health outcomes and eliminate health disparities (8).
The PRC Program bases its CBPR framework on that definition, and in its 2003 request for applications the PRC Program required that applicants 1) establish and maintain a center-level community committee; 2) establish and maintain partnerships with health departments, community groups and agencies, and academic units, and include these partners in center activities; and 3) collaborate with partners on planning and implementing the core research.
Characteristics of CBPR
CBPR is an orientation to research that alters the relationship between the researchers and the research participants. In traditional research, academicians define the research issues, determine how research is done, and decide how outcomes are used. University-based departments and professional schools are generally the arbiters of who has the appropriate knowledge to define research and who is qualified to perform it. In contrast, CBPR is predicated on mutual ownership of the research process and products as well as shared decision making (9).
Translating research findings into practice is always a desired outcome, yet the rate of translation has been "inefficient and disappointing" in traditional research (10). In contrast, CBPR methodology theoretically increases the likelihood that research findings will be readily implemented in communities, because communities are invested in the preliminary testing during the research process. Because CBPR is iterative, the research process can build strong and long-lasting partnerships between researchers and research participants (11). Indeed, CBPR relies on durable partnerships that take substantial investments of time and resources to develop and sustain (11).
Efforts to summarize CBPR activity have demonstrated striking variations in methodology. The Journal of General Internal Medicine's special supplement on CBPR in July 2003 included 11 original research papers that demonstrate "how broadly CBPR is being applied, geographically, within specific population groups and clinical scenarios, and methodologically" (12). For example, the settings ranged from rural to urban; the scope of research included randomized controlled trials, intervention studies with prestudy and poststudy comparisons, survey research, and qualitative methodology; and the clinical scenarios ranged from chronic disease management to cancer prevention (12).
Similarly, in a review of 185 articles of CBPR, Viswanathan et al (13) found broad variation in methods, results, and quality of research. The studies involved variable degrees of community participation, from research idea generation to project-specific advisory roles, as well as differences in other characteristics, such as outcome measures, definitions of success, and rigor of research methodology -from randomized, controlled trials to nonintervention studies. The authors noted that the nonexperimental design of most CBPR studies impedes the generalizability of findings (13).
Proposed measures of success in CBPR have included completion of a research component, increased community capacity to address the problem, successful partnership, and sustainability of the project. Viswanathan et al recommend that CBPR projects be assessed on the degree of "colearning" by both researchers and community collaborators (13). O'Toole et al have lamented the lack of high-quality reports for CBPR studies and suggest that a common language for reporting findings would be helpful (12).
These analyses reflect the status of CBPR and highlight gaps in the field. Deficits that have emerged include the lack of 1) common terminology, outcome measures, and an evaluation framework, which are necessary to compare CBPR studies, and 2) a structured and systematic list of essential competencies for CBPR at both the individual and organizational levels.
Competencies for CBPR
Several authors and institutions have proposed a broad set of competencies necessary for CBPR researchers. The Kellogg Community Health Scholars program lists items such as understanding the mission and the values of CBPR; knowing theoretical frameworks, models, and methods of planning, implementation, and evaluation of CBPR; and being able to translate the process and findings of CBPR into policy (14).
Whitmore et al (15) list several questions that organizations and individual researchers should consider before embarking on a CBPR project, including whether the research team has the necessary skills to conduct the project. Also important is whether the institution has the requisite resources and infrastructure to engage in this type of research. Standardization of core competencies would allow organizations to evaluate how well their skills and resources would match with this methodology and would advance the field of CBPR.
Israel et al (16) have proposed a list of training and experience as well as personal qualities required to be a CBPR researcher -for example, ability to be self-reflective and admit mistakes, capacity to work within different power structures, and humility. Seifer et al (17) emphasize the need for interpersonal and facilitation skills, sensitivity to community needs, good communication skills, technical skills (such as grant writing and program evaluation), connections to the community, and commitment to the partnership process. Despite the guidance offered by these resources, the lists are neither comprehensive nor uniform (17).
Even less guidance is available on the institutional capabilities necessary to support and sustain CBPR. Few experts provide details on the time, energy, resources, funding mechanisms, tenure structures, organizational hierarchy, research focus, power-sharing arrangements, and institutional commitment required to conduct CBPR and maintain successful partnerships with communities. Practitioners of CBPR have addressed some of these points, but the discussion remains fragmented (18,19).
Qualitative Assessment of the PRC Program
To provide a better understanding of partnerships, organizational factors, and the value added by CBPR, the PRC Program launched a qualitative assessment of its national efforts in the fall of 2006. One aspect of the assessment was to describe the implementation of CBPR since 2003 and answer the question, "How do PRC researchers and their communities interact to develop, implement, evaluate, and disseminate the core prevention research project?" Three key topics were explored: 1) types of community partnerships and levels of involvement, including the capacity of community committees for research, evaluation, and training; 2) types of participation in PRC research by community committee members and key partners, including factors that help and hinder partner relationships; and 3) perceived benefits of being in the PRC network as viewed by community members. Two additional questions, one related to organizational factors and one related to training, technical assistance, and mentoring, cover topics for understanding the PRCs' approaches to CBPR.
Data collection took place from January 2007 through June 2007 and included 1-hour interviews with PRC directors and principal investigators, training coordinators, and community committee chairs. For each topic area, data were collected from a carefully chosen sample of PRCs, which helped ensure that a range of CBPR approaches were covered.
The qualitative assessment will provide a wide range of descriptive information on PRC partnerships and CBPR approaches and strategies, including the number and types of community committees, the development and evolution of community partnerships, the involvement of partners and community members in core prevention research projects, and methods used to ensure that partners and community members have input into core research. The assessment will also provide models that can be used for partner and community involvement in research; university support for community-based work; and training, technical assistance, and mentoring activities. Future studies need to determine the characteristics and capacities of researchers, academic institutions, and community organizations involved in successful CBPR projects. This information will make it easier to develop a comprehensive inventory of competencies required to conduct CBPR.
Conclusion
CBPR has been referred to as "research plus" (12) in that it not only increases the knowledge base for public health but also promises to identify interventions that are ready for dissemination and are sustainable because they have been developed with community engagement. A review of the quantity and quality of the CBPR literature reveals a picture as varied as the projects, the researchers, and the communities involved. Such extreme variation in methods and quality does not generate a useful body of knowledge. It is thus timely and imperative to delineate a core set of skills and expertise required to be a CBPR researcher and describe the essential resources and organizational infrastructure needed to successfully support CBPR. Standardizing the evaluation measures will enhance the scientific rigor of the research methods employed and improve the field's ability to study, understand, and rectify complex community health problems. The qualitative assessment of CBPR projects within the PRC Program has the potential to accelerate this process. Once an agreed-upon set of competencies and resources is established, assessment of CBPR itself can begin.
|
v3-fos-license
|
2016-05-04T20:20:58.661Z
|
2013-05-13T00:00:00.000
|
17927236
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "http://downloads.hindawi.com/journals/crigm/2013/496419.pdf",
"pdf_hash": "12effaa5a5c643ba1e5154691242fd820cbfb936",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46514",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "f7ed69e2cd3dae4bf86fad5eef925ec31ab0d79d",
"year": 2013
}
|
pes2o/s2orc
|
Peritoneal Lipomatosis: A Case Report of a 12-Year-Old Boy
Peritoneal lipomatosis is a rare disease in childhood with only two cases previously described in children. We report a further case of a 12-year-old boy diagnosed with peritoneal lipomatosis. His main symptoms were abdominal pain, alternating bowel habit, abdominal distension, and melaena. His diagnostic work up included an abdominal MRI, wireless capsule endoscopy and single-balloon enteroscopy. Peritoneal lipomatosis although rare can be diagnosed in childhood. It is a benign clinical entity with variable manifestations.
Introduction
Lipomatosis of the peritoneum is a rare disorder with less than 50 cases published to date [1]. There are only two previously reported cases in childhood. We describe a further case of a 12-year-old boy diagnosed with peritoneal lipomatosis. We report the findings of MRI and endoscopic assessment by wireless capsule endoscopy (WCE) and Single-balloon Enteroscopy (SBE).
Case Report
A two-year-old boy initially presented to his local hospital with a prominent gastrocolic reflex, bloating, and an intermittently changing shape of his abdomen. At the age of 4.5 years, because of ongoing symptoms, he had an abdominal ultrasound that was suggestive of peritoneal lipomatosis. As he continued to complain of abdominal pain, urgency, and alternating bowel habit, he was referred for further evaluation to our specialist centre at 12 years of age. At that time other symptoms were noted, including intermittent abdominal distension and occasional blood in the stools. Radiological imaging was initially performed by MRI scan, which revealed widespread peritoneal lipomatosis encasing the intraperitoneal contents, with the lipomatous tissue lying anterior to the liver and stomach, extending between the right lobe of the liver and the right kidney, and continuing down through the peritoneal cavity into the pelvis.
The lipomatosis displaced the bowel loops centrally. The liver, spleen, kidneys, and pancreas appeared normal (Figures 1 and 2). The WCE (Pillcam SB, Given Imaging, Yoqneam, Israel) did not reveal any significant abnormalities apart from ill-defined, yellowish, round lesions (Figure 3). As the boy's symptoms did not settle, he underwent SBE (Olympus, Tokyo, Japan). Multiple mucosal biopsies were taken, which showed normal mucosa from the duodenum to the distal jejunum/proximal ileum. No specific therapy was given, and his symptoms improved spontaneously except for very mild nonspecific abdominal pain.
Discussion
Lipomatosis is a distinct clinicopathologic entity characterized by the development of nonencapsulated lipomas in subcutaneous tissues [2]. Lipomas are well-defined, noninvasive, benign, encapsulated tumours with a composition similar to that of normal adipose tissue [3]. In generalised lipomatosis, there are masses of diffusely infiltrating lipomatous tissue resembling simple lipomas except for their extensive infiltrative distribution [3]. Involvement of the face, neck, extremities, trunk, abdomen, and pelvis has been reported [2].
Pelvic, abdominal, and intestinal lipomatosis are distinct clinical entities. They are characterised by abdominal distension as a consequence of intraperitoneal and retroperitoneal fat, and by respiratory distress due to mediastinal airway compression [2]. Pelvic lipomatosis may present with bladder dysfunction, constipation, nonspecific abdominal discomfort, oedema of the lower extremities, and ureteral obstruction leading to hydronephrosis and renal failure [4]. So far, two cases of abdominal lipomatosis have been reported in children. The first described a five-year-old boy who presented with periumbilical nonradiating abdominal pain, abdominal distension, and umbilical herniation [3]. The second was an eight-year-old child diagnosed with diffuse lipomatosis including intraperitoneal, retroperitoneal, and abdominal wall involvement [5]. To the best of our knowledge, this is the third reported case of peritoneal lipomatosis in the literature, and the one with the most extensive series of investigations to evaluate this condition so far.
Our patient is the first reported case of peritoneal lipomatosis investigated by panenteroscopy using WCE and enteroscopy. The patient suffered from altered bowel habit, occasional blood in the stools, pain, and abdominal distension. Despite panendoscopic examination and biopsy assessment throughout the gastrointestinal tract, no mucosal pathology could be identified, and despite extensive radiological and blood work-up his symptoms could not otherwise be explained. Therefore, by elimination of all other causes, we attributed the symptoms to direct mass effects of the lipomatosis. The peritoneal lipomatosis would be responsible for the mechanical symptoms of distension, causing intermittent obstruction and altering bowel transit time. Nevertheless, this would not explain the symptom of blood in the stools, albeit a very occasional symptom as reported by the family. We believe this to be the first WCE description of this condition, with the addition of the rarely performed SBE, which examined the entire small bowel of this child and provided histological specimens excluding infiltration of the intestinal wall by the lesions.
Our case should not be confused with intestinal lipomatosis, which is the presence of numerous circumscribed lipomas of the intestine [6]. Fewer than 50 cases of intestinal lipomatosis have been described worldwide, with presenting ages ranging from 20 to 88 years [1]. There is only one reported case in a child: a 10-year-old girl who presented with multiple jejunal and ileal lipomas, whose only presenting symptom was abdominal pain and whose diagnosis was made by computed tomography of the abdomen [7].
|
v3-fos-license
|
2016-10-25T01:07:10.231Z
|
2016-03-01T00:00:00.000
|
11522909
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "HYBRID",
"oa_url": "https://doi.org/10.1016/j.nbd.2015.12.001",
"pdf_hash": "60ca3e3fa4b0069016e642e238222272ddcb5c6f",
"pdf_src": "ScienceParsePlus",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46516",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "323e80b609f98d379fe21ac7949d8f08ac2d7c24",
"year": 2016
}
|
pes2o/s2orc
|
Inhaled 45–50% argon augments hypothermic brain protection in a piglet model of perinatal asphyxia
Cooling to 33.5 °C in babies with neonatal encephalopathy significantly reduces death and disability; however, additional therapies are needed to maximize brain protection. Following hypoxia–ischemia we assessed whether inhaled 45–50% Argon from 2–26 h augmented hypothermia neuroprotection in a neonatal piglet model, using MRS and aEEG, which predict outcome in babies with neonatal encephalopathy, and immunohistochemistry. Following cerebral hypoxia–ischemia, 20 newborn male Large White piglets aged < 40 h were randomized to: (i) Cooling (33 °C) from 2–26 h (n = 10); or (ii) Cooling and inhaled 45–50% Argon (Cooling + Argon) from 2–26 h (n = 8). Whole-brain phosphorus-31 and regional proton MRS were acquired at baseline, 24 and 48 h after hypoxia–ischemia. EEG was monitored. At 48 h after hypoxia–ischemia, cell death (TUNEL) was evaluated over 7 brain regions. There were no differences in body weight, duration of hypoxia–ischemia or insult severity; throughout the study there were no differences in heart rate, arterial blood pressure, blood biochemistry and inotrope support. Two piglets in the Cooling + Argon group were excluded. Comparing Cooling + Argon with Cooling there was preservation of whole-brain MRS ATP and PCr/Pi at 48 h after hypoxia–ischemia (p < 0.001 for both) and lower 1H MRS lactate/N acetyl aspartate in white (p = 0.03 and 0.04) but not gray matter at 24 and 48 h. EEG background recovery was faster (p < 0.01) with Cooling + Argon. An overall difference in average cell death between Cooling and Cooling + Argon was observed (p < 0.01); estimated cell counts per mm2 were 23.9 lower (95% C.I. 7.3–40.5) for Cooling + Argon versus Cooling. Inhaled 45–50% Argon from 2–26 h augmented hypothermic protection at 48 h after hypoxia–ischemia, shown by improved brain energy metabolism on MRS, faster EEG recovery and reduced cell death on TUNEL. Argon may provide a cheap and practical therapy to augment cooling for neonatal encephalopathy.
Introduction
Neonatal Encephalopathy (NE) consequent on perinatal hypoxia-ischemia is the third leading cause of child death and one of the main causes of preventable child neurodisability worldwide (Lawn et al., 2014). In the developed world, cooling to 33-34 °C for 72 h in moderate to severe NE increases the rate of survival without impairments in childhood to 15%, but despite cooling, around 25% of infants die and 20% of survivors have sensorimotor or cognitive impairments (Azzopardi et al., 2014). Attempts to increase brain protection with deeper and longer cooling (Alonso-Alconada et al., 2015; Shankaran et al., 2014) suggest that current clinical cooling protocols are optimal and that other therapies that can augment hypothermic neuroprotection in NE are needed (Robertson et al., 2012).
In a comparative review of potential neuroprotective agents, the noble gas Xenon was rated in the top six; however, there was concern over its cost and the requirement for specialized equipment for delivery and scavenging (Robertson et al., 2012). Xenon has shown neuroprotective properties in adult and neonatal (Chakkarapani et al., 2010; Faulkner et al., 2011; Ma et al., 2005) models of hypoxia-ischemia; this neuroprotection is stronger in neonatal models when Xenon is combined with cooling. In neonatal rat (Ma et al., 2005) and piglet (Chakkarapani et al., 2010; Faulkner et al., 2011) studies, the combination of Xenon with cooling provided neuroprotection while neither intervention alone was as effective. Current interest is turning towards Argon, which is the most abundant inert gas, already widely used in industry and available at a cost 200 times lower than Xenon. Argon does not produce demonstrable anesthetic effects at atmospheric pressure and provides potent neuroprotection, at least equivalent to Xenon, in animal models of hypoxic-ischemic brain injury and in vitro using murine organotypic hippocampal slice cultures (Loetscher et al., 2009) and neuronal cultures (Jawad et al., 2009). In some studies Argon is superior to Xenon for organ protection from ischemia-reperfusion injury (Irani et al., 2011). In vitro models of cerebral ischemia and traumatic brain injury suggest that the optimum concentration of Argon for protection is 50% and that the therapeutic window lasts up to 3 h (Loetscher et al., 2009). Argon 50% administered one hour after transient middle cerebral artery occlusion (MCAO) in adult rats significantly reduced infarct volumes and composite adverse outcomes (Ryang et al., 2011). Protection has also been observed in neonatal rodent models, where 70% Argon at 2 h after hypoxia-ischemia improved cell survival to naive levels and reduced infarct volume (Zhuang et al., 2012).
We hypothesized that Argon-augmented cooling would lead to better brain protection than cooling alone after a hypoxic-ischemic insult. Our aim was to assess whether 24 h of 45-50% Argon started 2 h after hypoxia-ischemia augments hypothermic neuroprotection in a piglet perinatal asphyxia model. This model replicates neonatal intensive care with meticulous monitoring and control of physiological and metabolic parameters. It also has strong similarities to newborn infants with NE in terms of the timing of the evolution of injury after hypoxia-ischemia (Azzopardi et al., 1989; Lorek et al., 1994), pattern of injury, neuropathology and cerebral magnetic resonance spectroscopy (MRS) (Thayyil et al., 2010). The efficacy of Argon protection was assessed using: (i) cerebral MRS biomarkers, namely proton (1H) MRS lactate/N-acetyl aspartate (NAA) (Thayyil et al., 2010) and phosphorus-31 (31P) MRS phosphocreatine/inorganic phosphate (PCr/Pi) and NTP/exchangeable phosphate pool (epp) (Azzopardi et al., 1989); (ii) aEEG background activity recovery over 48 h, a strong predictor of outcome in babies with NE (van Rooij et al., 2005); and (iii) histological assessment of cell death using TUNEL at 48 h after hypoxia-ischemia.
Sample size calculation
Our primary outcomes were cerebral lactate/NAA and NTP/epp. Previous work with our model suggested that the change in lactate/NAA over 48 h varied between normothermic and hypothermic groups by 1.0 U, with a standard deviation of 0.65 U (log scale). Assuming a similar magnitude of additional effect for Argon-augmented cooling following HI versus cooling alone, similar variability at 48 h, 5% significance and 80% power, at least eight subjects were required in each group based on a two-sample t-test sample size calculation.
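The required group size can be reproduced with a standard two-sample t-test power calculation. The following is a minimal sketch (not the authors' code), assuming the stated 1.0 U difference and 0.65 U standard deviation on the log scale, two-sided 5% significance and 80% power.

# Minimal sketch of the sample size calculation described above.
# Assumptions: Cohen's d = 1.0 / 0.65, two-sided alpha = 0.05, power = 0.80.
from statsmodels.stats.power import TTestIndPower

effect_size = 1.0 / 0.65
n_per_group = TTestIndPower().solve_power(effect_size=effect_size,
                                          alpha=0.05, power=0.80,
                                          alternative="two-sided")
print(round(n_per_group))  # approximately 8 animals per group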
Animal experiments and surgical preparation
All animal experiments were approved by the Ethics Committee of UCL and performed according to the UK Home Office Guidelines [Animals (Scientific Procedures) Act, 1986]. The study complies with the ARRIVE guidelines. Twenty male piglets, aged less than 40 h and weighing 1.8-2.1 kg, were anesthetized and surgically prepared as described previously (Lorek et al., 1994). The study time-line is shown in Fig. 1. Anesthesia was induced by 4% v/v isoflurane through a facemask for around 5 min to facilitate tracheostomy and intubation. Throughout the surgery, isoflurane was maintained at 2.8-3%, guided by peripheral oxygen saturation monitoring (Nonin Medical, Plymouth, MN, USA) and the animal's response to stimulation. Following tracheostomy, a suitably sized endotracheal tube (Smiths Medical, Ashford, Kent, UK) was fixed and the piglet was mechanically ventilated (SLE 2000 infant ventilator, Surrey, UK). Ventilator settings were adjusted to maintain the partial pressure of oxygen (PaO2) at 8-13 kPa and carbon dioxide (PaCO2) at 4.5-6.5 kPa, allowing for temperature and fraction of inspired oxygen (FiO2) correction of the arterial blood sample.
After the airway was secured, both common carotid arteries were surgically isolated at the level of the fourth cervical vertebra and a vascular occluder (OC2A, In Vivo Metric, Healdsburg, CA, USA) was placed on each side. After completion of surgery, inspired isoflurane concentration was maintained at 2% v/v.
An umbilical venous catheter was inserted for infusion of maintenance fluids (10% dextrose, 60 ml/kg/day before the insult and 40 ml/kg/day after resuscitation), fentanyl (5 μg/kg/h), and antibiotics (benzyl penicillin 50 mg/kg every 12 h and gentamicin 4 mg/kg once a day). An umbilical arterial catheter was inserted for invasive physiological monitoring (SA Instruments) of heart rate and arterial blood pressure, and for blood sampling for arterial gases and electrolytes (Abbott Laboratories, UK). Hepsal (0.5 IU/ml of heparin in 0.9% saline solution) was infused at a rate of 0.3 ml/h to prevent umbilical arterial catheter blockage.
MR methods
The head was immobilized in a stereotactic frame for MRS acquisition. Piglets were positioned within the bore of a 9.4 Tesla Agilent MR scanner. 1H and 31P MRS data were acquired at baseline and at 24 and 48 h after cerebral hypoxia-ischemia.
31P MRS
A 7 cm × 5 cm elliptical transmit-receive MRS surface coil tuned to the 31P resonant frequency was positioned on top of the head. 31P MRS was acquired with 1-minute resolution using a non-localized single-pulse acquisition. MRS data were analyzed using the Advanced Method for Accurate, Robust and Efficient Spectral fitting of MRS data with use of prior knowledge (AMARES) (Vanhamme et al., 1997) as implemented in the jMRUI software. Prior knowledge of NTP multiplet structure was used. NTP is predominately ATP and the latter contributes approximately 70% of the NTP signal (Mandel and Edel-Harth, 1966). Thus NTP changes during this experiment predominately reflected ATP changes. Pi was fitted using 4 separate components and PCr with a single component. The following peak-area ratios were calculated: Pi/epp, PCr/epp, and NTP/epp, where epp = exchangeable phosphate pool = Pi + PCr + 2γ-NTP + β-NTP.
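As a brief illustration of these ratios (the peak areas below are hypothetical, not study data; treating β-NTP as the NTP term in the numerator is our assumption, since the text does not specify which NTP resonance enters it):

# Compute the 31P peak-area ratios defined above from fitted peak areas.
# epp = Pi + PCr + 2*gamma-NTP + beta-NTP; beta-NTP is assumed to represent NTP.
def phosphorus_ratios(pi, pcr, gamma_ntp, beta_ntp):
    epp = pi + pcr + 2 * gamma_ntp + beta_ntp
    return {"Pi/epp": pi / epp,
            "PCr/epp": pcr / epp,
            "NTP/epp": beta_ntp / epp}

print(phosphorus_ratios(pi=1.2, pcr=2.0, gamma_ntp=0.9, beta_ntp=1.0))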
1H MRS
1H MRS data were collected from voxels located in the dorsal right subcortical white matter at the centrum semiovale level (white matter voxel, 8 × 8 × 15 mm) and in the deep gray matter centered on both lateral thalami (deep gray matter voxel, 15 × 15 × 10 mm), using a combination of a 65 × 55 mm elliptical receive surface coil, a 150 mm diameter transmit volume coil and a LASER acquisition (TR = 5000 ms, TE = 288 ms, 128 averages). Spectra were analyzed using AMARES as implemented in the jMRUI software and the lactate/NAA peak area ratio was calculated.
Cerebral hypoxia-ischemia (HI)
31P MRS data were collected at baseline, during hypoxia-ischemia and for one hour after cessation of hypoxia-ischemia. Hypoxia-ischemia was induced inside the MR scanner by remotely inflating the vascular occluders around both common carotid arteries and simultaneously reducing FiO2 to 6% (vol/vol). During hypoxia-ischemia the β-NTP peak height was continuously monitored using in-house Matlab (Mathworks) software. At the point at which β-NTP had fallen to 50% of its baseline value, FiO2 was increased to 9%. When β-NTP fell to 40% of baseline height, the inspired oxygen fraction was titrated to keep the β-NTP peak height between 30% and 40% of its original height for a period of 12.5 min. At the end of hypoxia-ischemia the carotid arteries were de-occluded and the FiO2 returned to 21%. Insult severity was calculated (Faulkner et al., 2011).
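The titration rule can be summarised as a simple function of the β-NTP peak height expressed as a fraction of baseline. The sketch below is illustrative only (not the authors' monitoring software); the 0.01 adjustment step below the target band is an assumption, as the text does not state how FiO2 was incremented.

# Illustrative encoding of the FiO2 thresholds described above.
def fio2_target(beta_ntp_fraction, current_fio2=0.06):
    if beta_ntp_fraction > 0.50:
        return 0.06                    # insult phase: keep FiO2 at 6%
    if beta_ntp_fraction > 0.40:
        return 0.09                    # beta-NTP fallen to 50% of baseline: FiO2 to 9%
    if beta_ntp_fraction < 0.30:
        return min(current_fio2 + 0.01, 0.21)   # below target band: give more oxygen (assumed step)
    return current_fio2                # within the 30-40% band: hold current FiO2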
Experimental groups
Following resuscitation, while in the bore of the MR system, piglets were randomized (computer-generated randomization revealed after HI) into 2 groups: Cooling or Cooling + Argon (Fig. 1). Both groups were cared for over 48 h after hypoxia-ischemia and maintained hypothermic (core temperature 33.5 °C) between 2 and 26 h. Physiological parameters were compared between groups with the Mann-Whitney test at each time point.
Argon delivery
In those randomized to Cooling + Argon, 45-50% Argon was delivered through the ventilator from 2-26 h (Fig. 1). Argon gas was obtained from Air Liquide Ltd. (Manchester, UK). Piglets were ventilated via an SLE 2000 infant ventilator (SLE Ltd., Surrey, UK), which has both oxygen and air supply inlets with an oxygen blender. During Argon treatment, the air supply was switched with an Argon cylinder connected in-line with the ventilator (Fig. 2). When FiO2 was increased to maintain piglet peripheral oxygen saturations and PaO2 within normal parameters, the ventilator blender increased the oxygen delivery and decreased delivery from the Argon cylinder, resulting in a small reduction in inspired Argon and nitrogen concentrations. No piglet required more than 30% FiO2; therefore all piglets in the Cooling + Argon group received a minimum of 45% Argon throughout treatment.
Fig. 1. Study time-line. Following baseline data acquisition, piglets underwent cerebral hypoxia-ischemia. At the end of hypoxia-ischemia (time 0), piglets were randomized to (i) Cooling (33.5 °C) for 24 h or (ii) Cooling + 50% Argon for 24 h. Treatment was started at 2 h after time 0. Piglets were maintained under meticulous intensive care for 48 h following HI, prior to euthanasia. MRS was acquired at baseline, during HI, for the first 60 min after HI, and at 24 and 48 h. EEG was acquired at baseline and in between the MRS acquisitions.
Fig. 2. Argon delivery. Piglets were ventilated via an SLE 2000 infant ventilator (SLE Ltd., Surrey, UK), which has both oxygen and air supply inlets with an oxygen blender. Each cylinder contained 4800 l of compressed gas comprising 50% argon, 21% oxygen and 29% nitrogen. During Argon treatment, the air supply was switched with an Argon cylinder connected in-line with the ventilator. When FiO2 was increased to maintain piglet peripheral oxygen saturations and PaO2 within normal parameters, the ventilator blender increased the oxygen delivery and decreased the delivery from the Argon cylinder, resulting in a reduction in inspired Argon and nitrogen concentrations as shown. No piglet required more than 30% FiO2; therefore all piglets in the Argon + Cooling group received a minimum of 45% Argon throughout treatment.
aEEG/EEG acquisition
After surgical preparation, multichannel six-lead EEG monitoring (Nicolet, Care Fusion, Wisconsin, USA) was acquired at baseline and throughout the periods between MRS data acquisitions. Filtered amplitude-integrated EEG recordings were classified according to the pattern classification of Hellström-Westas et al. (1995). A score of 0 was a flat trace; 1, continuous low voltage; 2, burst suppression; 3, discontinuous normal voltage; and 4, continuous normal voltage, at baseline and then every hour after hypoxia-ischemia.
Brain histology
At 48 h after hypoxia-ischemia, piglets were euthanized with pentobarbital and the brain was fixed by cardiac perfusion with cold 4% paraformaldehyde, dissected out and post-fixed at 4 °C in 2% paraformaldehyde for 7 days. Coronal slices (5 mm thick) of the right hemisphere, starting from anterior to the optic chiasma, were embedded in paraffin, sectioned to 8-μm thickness and stained with hematoxylin and eosin to validate the bregma for analysis. For each animal, 2 sections (bregma 00 and −2.0) were stained and 7 different regions of the brain were examined (Fig. 3).
To assess cell death, brain sections were stained for nuclear DNA fragmentation using histochemistry with transferase-mediated biotinylated d-UTP nick end-labeling (TUNEL) as previously described (Robertson et al., 2013). Briefly, TUNEL sections were pretreated in 3% hydrogen peroxide, subjected to a proteinase K pre-digestion (Promega, Southampton, UK) and incubated with TUNEL solution (Roche, Burgess Hill, UK). TUNEL was visualized using avidin-biotinylated horseradish complex (ABC, Vector Laboratories, Peterborough, UK) and diaminobenzidine/H2O2 (DAB, Sigma, Poole, UK) enhanced with CoSO4 and NiCl2. TUNEL sections were dehydrated and cover-slipped with DPX (VWR, Leighton Buzzard, UK). For each animal and brain region, TUNEL-positive nuclei were counted at two levels and from 7 regions (Fig. 3) by an investigator blind to the treatment group, and the average was converted into counts per mm2.
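As a small illustration of that conversion (the microscope field dimensions below are hypothetical, not taken from the study):

# Convert an averaged per-field TUNEL count into cells per mm^2.
def cells_per_mm2(mean_count_per_field, field_width_mm=0.45, field_height_mm=0.34):
    field_area_mm2 = field_width_mm * field_height_mm   # assumed field size
    return mean_count_per_field / field_area_mm2

print(round(cells_per_mm2(12.0), 1))   # e.g. 12 nuclei per field -> ~78.4 cells/mm^2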
Statistical methods
MRS
All analyses were performed using the SAS JMP® v11.0.0 software. A statistical model was fitted to the ratios NTP/epp, PCr/Pi and Lac/NAA. An analysis of variance (ANOVA) model was fitted and the differences in the means on the log scale for the two treatment groups (Cooling versus Cooling + Argon) were estimated from the model at each of the three time points with 95% confidence intervals (C.I.s) for the differences. The differences in treatment group means are shown graphically using 95% Least Significant Difference error bars. Corrections for multiple measurements were not made.
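The sketch below shows an equivalent per-time-point model in Python with statsmodels (the authors used SAS JMP); the file name and the column names "ratio", "group" and "timepoint" are illustrative assumptions about how such data might be laid out.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("mrs_ratios.csv")            # hypothetical long-format data
df["log_ratio"] = np.log10(df["ratio"])       # ratios analysed on the log scale

for timepoint, sub in df.groupby("timepoint"):
    fit = smf.ols("log_ratio ~ C(group)", data=sub).fit()   # one-way ANOVA at this time point
    diff = fit.params.iloc[1]                 # Cooling + Argon minus Cooling (log10 scale)
    lo, hi = fit.conf_int().iloc[1]
    print(f"{timepoint} h: difference = {diff:.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")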
EEG
Following the baseline scoring, scores were obtained hourly until 48 h from hypoxia-ischemia. Each subject's scores were averaged over the following periods: 0-6 h, 7-12 h, 13-18 h, 19-24 h, 25-30 h, 31-36 h and 37-42 h, and an analysis of variance model was fitted to the mean scores. The differences in the means between the two treatment groups (Cooling + Argon versus Cooling) were estimated from the model at each of the seven time points with 95% confidence intervals (C.I.s) for the differences.
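A minimal illustration of the 6-hour averaging step (the file and column names are assumptions, not the authors' data files):

import pandas as pd

eeg = pd.read_csv("aeeg_scores.csv")     # hypothetical columns: subject, group, hour, score
bins = [0, 6, 12, 18, 24, 30, 36, 42]    # hours beyond 42 fall outside the analysed windows
eeg["window"] = pd.cut(eeg["hour"], bins=bins, include_lowest=True)
window_means = (eeg.groupby(["subject", "group", "window"], observed=True)["score"]
                   .mean()
                   .reset_index())       # one mean score per subject and 6-h window
print(window_means.head())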
TUNEL
An analysis of variance model was fitted to the mean counts to give an estimate of the expected counts per mm2. The overall difference between the means for the two treatment groups, and treatment differences across regions, are presented with 95% C.I.s and graphically using 95% Least Significant Difference error bars.
Results
There were 10 animals in the Cooling group and 8 animals in the Cooling + Argon group. One piglet (Cooling + Argon) was lost prior to 48 h due to cardiac arrest. Another piglet (Cooling + Argon) was excluded because the cooling mattress malfunctioned and the piglet was normothermic between 7 and 9 h; in addition, a fault was noted in the ventilator and Argon delivery was not assured for a period of several hours.
Physiological data and insult severity
There were no significant differences between groups in body weight, postnatal age or baseline physiological measures, apart from base excess, which was more alkaline in the Cooling + Argon group at baseline (Table 1). There was no difference in hypoxic-ischemic insult severity between the two groups (Table 1). There was no difference between groups in volume replacement or inotrope use following hypoxia-ischemia (Table 2).
Argon usage
Fifty-four Argon cylinders were used for this study. Each cylinder contained 4800 l of compressed gas comprising 50% argon, 21% oxygen and 29% nitrogen. Each cylinder lasted around 5.5 h; approximately 872 l/h, or 14.5 l/min, was delivered to the SLE ventilator. The SLE ventilator, however, vents 8-9 l of air/min, leaving a gas delivery to the pig of 4-5 l/min.
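A quick check of the per-cylinder flow arithmetic (illustrative only):

cylinder_volume_l = 4800                  # litres of compressed gas per cylinder
cylinder_duration_h = 5.5                 # hours each cylinder lasted
flow_l_per_h = cylinder_volume_l / cylinder_duration_h
print(round(flow_l_per_h), round(flow_l_per_h / 60, 1))   # ~873 l/h, ~14.5 l/min to the ventilator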
MRS analysis showed improved PCr/Pi and NTP/epp at 48 h and lower white matter Lac/NAA at 24 and 48 h
The least squares means plots and 95% Least Significant Difference (LSD) bars for the NTP/epp, PCr/Pi and Lac/NAA (on a log10 scale) in thalamus and white matter are shown in Fig. 4. The differences in the means and C.I.s are shown in Table 3. NTP/epp and PCr/Pi means were significantly higher at 48 h in the Cooling + Argon group compared with the Cooling group (p = 0.01 for both). White matter lactate/NAA was significantly lower in the Cooling + Argon group compared with the Cooling group at 24 and 48 h (p = 0.03 and p = 0.04 respectively). Thalamic 1H MRS showed no differences between groups at any time point.
aEEG recovery was faster in the Cooling + Argon group
The group mean hourly aEEG scores were significantly higher (p < 0.05) in the Cooling + Argon versus Cooling group from 18 h post hypoxia-ischemia onward, indicating faster recovery of brain electrical activity towards normal with Argon-augmented cooling. The overall difference between the means for the two treatment groups, and treatment differences for each time interval, are presented with 95% C.I.s (Table 4) and graphically using 95% Least Significant Difference error bars (Fig. 5).
Cooling + Argon decreased TUNEL positive cell death at 48 h
Representative photomicrographs of TUNEL staining in the putamen and periventricular white matter for Cooling and Cooling + Argon are shown in Fig. 6A-D. The 95% C.I.s are shown in Table 5. There was evidence of an overall difference between the means of the Cooling versus Cooling + Argon treatment groups (p = 0.018), with the estimated cells per mm2 20.9 points lower (95% C.I. 3.7-38.2) for Cooling + Argon versus Cooling alone. The estimated mean cells per mm2 for the two treatment groups by region are shown in Fig. 6E. The region showing the largest difference in cell death was the putamen, with a mean difference of 61.0 cells per mm2 (p = 0.01). The caudate showed a mean difference of 42.2 cells per mm2 (p = 0.07) (Fig. 6F).
Discussion
Compared to cooling alone, we observed improved cerebral protection with a combination of cooling and 45-50% Argon started at 2 h after hypoxia-ischemia and continued for 24 h in our newborn piglet model of perinatal asphyxia. The addition of Argon to cooling increased whole brain ATP and PCr at 48 h (31P MRS NTP/epp and PCr/Pi) and reduced the secondary rise in white but not gray matter Lac/NAA on localized 1H MRS at 24 and 48 h. Compared to cooling, the combined therapy led to a faster aEEG recovery from 18 h after hypoxia-ischemia and a reduction in cell death on TUNEL staining over seven brain regions combined and in the putamen specifically.
Our MRS biomarkers correlate with injury severity after hypoxia-ischemia in the piglet (Lorek et al., 1994; Penrice et al., 1997) and outcome in infants with NE (Robertson et al., 1999). Higher ATP on 31P MRS in infants with NE is associated with better long-term outcome in clinical studies (Azzopardi et al., 1989); we saw higher levels of ATP with Argon-augmented cooling compared to cooling alone. High levels of thalamic lactate/NAA on MRS in neonates in the first month after birth are predictive of a poor 12-18 month neurodevelopmental outcome (Robertson et al., 1999; Thayyil et al., 2010); we saw reduced Lac/NAA on white matter MRS with Argon-augmented cooling compared to cooling alone. The aEEG background voltage and rate of aEEG recovery after HI are also predictive of neurodevelopmental outcome in NE (van Rooij et al., 2005), even in babies undergoing therapeutic hypothermia, with a positive predictive value of an abnormal background pattern of 0.82 at 48 h (Csekő et al., 2013). In our study, electrical activity on the aEEG normalized more rapidly with Argon and cooling from 18 h onwards compared to cooling alone. Argon was not associated with any cardiovascular or blood pressure changes during the 24 h delivery period and was well tolerated by all piglets. This is in keeping with the physiological stability seen in a recent piglet safety study of inhaled Argon in concentrations up to 80% (Alderliesten et al., 2014). In our study, Argon was straightforward to deliver through a standard neonatal SLE 2000 ventilator with no requirement for a scavenging system, unlike Xenon (Faulkner et al., 2012). Despite its 200-fold higher cost and scarce supply, Xenon has been more widely studied than Argon and has shown significant brain protection in adult (Schmidt et al., 2005) and neonatal (Ma et al., 2005) pre-clinical studies of hypoxia-ischemia, particularly when combined with cooling (Chakkarapani et al., 2010; Faulkner et al., 2011). Clinical trials of Xenon as an additional treatment to hypothermia for NE are underway in the UK (TOBYXe, NCT00934700, and Cool Xenon study, NCT01545271). Although Xenon is an anesthetic at atmospheric pressure and Argon is not, both noble gases share the important attributes of good blood-brain barrier penetration and fast onset, which are vital properties for neuroprotection.
Protective effects of Argon have been seen in in vitro excitotoxic and hypoxic models (Coburn et al., 2008; Jawad et al., 2009; Loetscher et al., 2009). An in vivo adult rodent model of transient middle cerebral artery occlusion (MCAO) exposed to 50% Argon 1 h after hypoxia-ischemia showed a significant overall reduction in infarct volume compared to no Argon; this protection was most marked in the cortex and basal ganglia (Ryang et al., 2011). Another MCAO study found similar protection in the cortex with 50% Argon, but increased subcortical damage and no improvement in neurological deficit (David et al., 2012). In our Argon piglet study we observed higher levels of ATP across the whole brain on 31P MRS but more protection in the white matter than the gray matter voxel on 1H MRS. This is supported by the finding of improved electrographic recovery on aEEG, as the white matter voxel captures the dorsal aspect of the brain corresponding to EEG lead placement. MRS voxel position may explain why gray matter protection was not seen; the gray matter MRS voxels were centered on both thalami and so did not sample metabolism in the putamen, where strong histological cell protection was seen. As well as significant histological protection in the putamen, there was a trend towards protection in the caudate but not in the thalamus. Histological protection was not seen in the white matter regions, the internal capsule and periventricular white matter (see Table 5 and Fig. 6); this may relate to the lower levels of cell death with cooling and less opportunity for white matter protection with the addition of Argon. Combining all brain regions, there was significant protection with the addition of Argon to cooling. A partial volume effect of the white matter MRS voxel may have resulted in MRS sampling of adjacent brain regions, including areas of gray matter. Protection observed with Argon in our current study appears more robust than with Xenon in the same model in 2011 with a similar insult (Faulkner et al., 2011); in that previous study we observed no statistically significant difference between Xenon-augmented cooling and cooling alone, although differences were seen when combined therapy was compared with no therapy. Interestingly, histological protection was also observed with Xenon-augmented cooling in the putamen (as seen with Argon) and the cortex; however, this protection was only significant when compared to control animals not receiving cooling (Faulkner et al., 2011).
There are several possible mechanisms thought to mediate Argon's brain protection. As Argon (atomic number 18) is smaller than Xenon (atomic number 54), this may change its binding sites. Argon triggers gamma-aminobutyric acid (GABA) neurotransmission by acting at the benzodiazepine binding site of the GABA-A receptor (Abraini et al., 2003); however, this is seen typically under hyperbaric conditions and the role of this pathway is unclear in the immature brain, in which GABA receptor activation is excitatory (Ben-Ari et al., 2012). Nevertheless, the activation of GABA receptors in the mature brain has been shown to be protective (Schwartz-Bloom and Sah, 2001). Argon has also been seen to have oxygen-like properties, increasing survival in animals under hypoxic conditions (Soldatov et al., 2008); this may confer mitochondrial protection in the post-ischemic period. Anti-apoptotic signaling is an important mechanism of Argon's brain protection. Like Xenon, Argon increases expression of cell survival proteins; those specific to Argon are increased expression of Bcl-2 (Zhuang et al., 2012) and enhancement of ERK 1/2 activity in astrocytes, neurons and microglia by direct activation of the MEK/ERK 1/2 pathway (Fahlenkamp et al., 2012). Argon reduces heat shock protein expression (Ulbrich et al., 2015). Argon does not appear to influence NMDA receptors or potassium channels (Brücken et al., 2014), which are two important mechanisms of Xenon and hypothermic protection. The additive protection seen with the combination of Argon and cooling suggests that Argon targets cell protection cascades complementary to those of cooling, unlike Xenon, which targets cascades more similar to cooling. For example, one important action of cooling is to reduce glycine release after hypoxia-ischemia (Illievich et al., 1994); unlike Xenon, Argon has been shown to have no effect on NMDA receptors at high or low glycine levels (Harris et al., 2013). We did not study the effect of Argon-augmented cooling on neuroinflammation.
The study has some limitations. We observed that the Cooling + Argon group's blood pH was more alkaline at baseline than the Cooling group's. It is unclear why this occurred; however, both levels are within the normal range for piglets and all included piglets appeared healthy, with normal baseline blood lactate levels. Some in vitro studies suggest there may be an additional effect of Argon at concentrations higher than 50% (Ulbrich et al., 2014); we chose 50% as it has shown maximal benefit in some studies (Loetscher et al., 2009) and allows for the inspired oxygen to be increased if required to maintain oxygen saturations. In both the Cooling and Cooling + Argon groups we saw higher blood glucose during cooling; this may be due to the cooling itself, as cooling has been associated with increased blood glucose variability and greater insulin requirements compared to the post-rewarming normothermic phase (Escolar et al., 1990). Finally, no formal corrections for multiple measurements of MRS were made; the p values are indicators of a signal that is likely to have biological and outcome significance.
In summary, this is the first study to show augmentation of hypothermic neuroprotection in a neonatal pre-clinical large animal model of birth asphyxia. 45-50% Argon was practical and easy
to deliver and was not associated with any physiological differences during the 24 h exposure with cooling. Protection was assessed using biomarkers (MRS and aEEG) which are strongly predictive of outcome in babies with NE. Overall brain cell death was reduced with the addition of 24 h of Argon to cooling, and this protection was evident even with a two-hour delay in starting therapy. Argon may be an affordable and practical therapy to augment hypothermic brain protection in babies with NE.
Fig. 4. Magnetic resonance spectroscopy of the brain at baseline, 24 and 48 h after hypoxia-ischemia. Least squares means plot with 95% Least Significant Difference (LSD) bars for the NTP/epp and PCr/Pi in whole-forebrain, and Lac/NAA in thalamus and white matter; non-overlapping bars show evidence of a significant difference. Whole-forebrain NTP/epp (A) and PCr/Pi (B) means were significantly higher in the Cooling + Argon group compared to Cooling at 48 h post-HI. White matter Lac/NAA was significantly lower in the Cooling + Argon group compared to Cooling at both 24 and 48 h (p = 0.03 and p = 0.04 respectively). Thalamic 1H MRS showed no difference between groups at any time point. *p < 0.05, **p = 0.01. epp = exchangeable phosphate pool; Lac = lactate; NAA = N-acetyl aspartate; Thal = thalamic; WM = white matter; HI = hypoxia-ischemia.
Fig. 5. Amplitude-integrated electroencephalogram (aEEG). The group mean hourly aEEG scores were significantly higher in the Cooling + Argon group versus Cooling alone, from 18 h post-HI onwards, indicating faster recovery of brain electrical activity towards normal. Panel A shows grouped mean hourly aEEG scores per 6-hour period with 95% Least Significant Difference (LSD) bars, where non-overlapping bars show evidence of a significant difference. Representative aEEG traces are shown at 24 h post-HI for the Cooling (B) and Cooling + Argon (C) groups. aEEG = amplitude-integrated EEG; HI = hypoxia-ischemia. *p < 0.05.
Fig. 6. TUNEL histology. Co-treatment with 45-50% Argon decreased TUNEL-positive cell death at 48 h after a hypoxic-ischemic insult when compared to cooling alone. Representative sections are shown at ×20 magnification from the same animal in the Cooling (left column, A and C) and Cooling + Argon (right column, B and D) groups, from the putamen (PTMN) and periventricular white matter (pvWM). There was an overall decrease in the estimated mean TUNEL-positive cells per mm2 (pooled across region and R0/R1 levels) in the Cooling + Argon group versus Cooling alone (E). On regional assessment, there was a significant decrease in TUNEL-positive cells in the putamen and evidence of a trend in the caudate in the Cooling + Argon group versus Cooling alone (F). *p < 0.01, †p = 0.07. Sensorimotor cortex = sTEX; Cingulate cortex = cTEX; Thalamus = THAL; Caudate nucleus = CDT; Putamen = PTMN; Internal capsule = IC; Periventricular white matter = PvWM.
Table 1
Baseline group data and physiological variables throughout the studies.
Time zero was set at the time of reperfusion/resuscitation. Mean and standard deviation (SD) values are presented for the two groups: (i) Cooling (n = 10) and (ii) Cooling + Argon (n = 8). Analysis using the Mann-Whitney test indicated no evidence of a difference between the two groups for any of the outcomes at any of the time points, apart from baseline base excess, which was more alkaline in the Cooling + Argon group. Insult severity was estimated by calculating the time integral of the change in NTP/epp during HI and the first 60 min of resuscitation.
Table 2
Volume and inotrope requirements.
Table 3
Summary of differences between Cooling and Cooling + Argon MRS results at 48 h. The Cooling + Argon group showed higher levels of NTP/epp and PCr/Pi and lower white matter Lac/NAA compared to Cooling alone. epp = exchangeable phosphate pool; PCr = phosphocreatine; Pi = inorganic phosphate; Lac = lactate; NAA = N acetyl aspartate. Bold emphasis indicates statistical significance.
Table 4
Differences between the Cooling + Argon and Cooling group aEEG scores at the different time points.
The Argon + Cooling group showed evidence of faster recovery of electrical activity compared to Cooling from the time period 13-18 h after hypoxia-ischemia onwards (p < 0.04 for all intervals except p = 0.06 for 31-36 h). HI: hypoxia-ischemia. Bold emphasis indicates statistical significance.
Table 5
Differences between the Cooling + Argon and Cooling group TUNEL counts for all seven brain regions and overall. The overall cell death in 7 regions was reduced in the Cooling + Argon group compared to the Cooling group, with the largest difference seen in the putamen and in the caudate. Bold emphasis indicates statistical significance.
|
v3-fos-license
|
2019-03-17T13:06:57.325Z
|
2017-08-01T00:00:00.000
|
80003755
|
{
"extfieldsofstudy": [
"Physics",
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://iopscience.iop.org/article/10.1088/1742-6596/884/1/012131/pdf",
"pdf_hash": "cda4651df6abf2e2b48fb18babe1d5e0c67deb2e",
"pdf_src": "IOP",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46520",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "e79dc465d3c65b71216c487582c92576ae364615",
"year": 2017
}
|
pes2o/s2orc
|
Combination therapy efficacy of catgut embedding acupuncture and diet intervention on interleukin-6 levels and body mass index in obese patients
Obesity is a major health problem worldwide, affecting more than 500 million adults, with an additional 1.5 billion adults classified as overweight. Acupuncture has been recognized as an adjunctive therapy for obesity, and recent evidence suggests its potential to reduce the inflammatory response in adipose tissue, a condition believed to be responsible for obesity-related health problems. Interleukin-6 (IL-6) has been proposed as an important mediator of the inflammatory response in adipose tissue, but the number of studies addressing the issue is still limited. A double-blind, randomized, placebo-controlled trial was conducted with 36 obese patients currently receiving dietary intervention. The patients were randomly allocated into the catgut embedding acupuncture group with diet intervention or the sham (placebo) embedding acupuncture group with diet intervention. Catgut embedding therapy was given twice at the CV12 Zhongwan, ST25 Tianshu, CV6 Qihai, and SP6 Sanyinjiao acupoints, with a two-week interval between procedures. The study endpoints were the IL-6 levels in blood plasma and body mass index (BMI), measured before and after the intervention. We observed a reduction in IL-6 levels (mean reduction 0.13 pg/mL, 95% CI: 0.03-0.23) and BMI (mean reduction 0.66, 95% CI: 0.43-0.88) in the acupuncture group; the corresponding mean reduction in BMI in the sham group was 0.34 (95% CI: 0.17-0.52). No difference was found in mean IL-6 reduction between the two groups (95% CI: -0.17 to 0.06). The results suggest that acupoint catgut embedding therapy may help reduce IL-6 levels and BMI in obese patients receiving dietary intervention.
Introduction
Obesity is one of the most common medical problems in many developing countries [1]. It is estimated that more than 500 million adults around the world suffer from obesity and 1.5 billion from being overweight [2,3]. In 2013, the National Health Survey reported that obesity rates in Indonesia are increasing each year; the obesity rate for adults in Indonesia aged 18 and older is 19.7% for men and 32.9% for women. Obesity is defined as a condition in which the amount of adipose tissue in the body is higher than muscle mass (20% or more above ideal weight) [4]. Obesity results in the accumulation of adipose tissue, which stores triglycerides. Research has also shown that white adipose tissue can produce bioactive substances called adipokines [5]. Adipose tissue can synthesize and secrete pro-inflammatory cytokines such as leptin, tumor necrosis factor-alpha (TNF-α), and interleukin-6 (IL-6) [1,5]. Obesity hence potentially increases insulin resistance and the risk of type 2 diabetes [3,5]. Increased visceral fat storage and adipocyte hypertrophy are commonly linked to the degree of inflammation in obese patients [6,7]. A previous study explained the role of inflammation in adipose tissue, especially inflammation caused by macrophages in obesity. The researchers concluded that macrophages infiltrate the adipose tissue in the weight-gaining phase and directly contribute to the inflammatory status of the patients; this later causes insulin resistance and obesity in rat and human subjects [3,7,8]. The number of macrophages is increased 4 to 5 times in obese adipose tissue [3,7]. However, the cause of this is not completely clear yet. It is thought that macrophages infiltrate adipose tissue in response to a stress signal from the adipocytes; these enlarged and insulin-resistant adipocytes become increasingly stressed in the obese condition [3,9]. Macrophages that infiltrate adipose tissue are then responsible for the production of various pro-inflammatory cytokines, such as TNF-α and IL-6 [6,7,9,10]. Inflammation is a necessary physiological response to recover homeostasis, but chronic or excessive inflammation can have damaging effects [2]. Research on the inflammatory process in obesity began in the 1990s, motivated in particular by the demonstration of increased TNF-α expression in the adipose tissue of obese rats [7,11]. The source of inflammation in obesity and its underlying mechanism have yet to be fully understood, but pro-inflammatory cytokines have an important role in the process [2].
In obesity, the level of inflammatory cytokines is higher than normal, which contributes to insulin resistance [2,12]. A previous study states that cytokines produced by adipose tissue might be responsible for insulin resistance in obesity. Subcutaneous adipose tissue secretes IL-6, and this secretion might be correlated with a patient's body mass index (BMI) [1]. There is evidence that obesity is marked by low-grade chronic inflammation, involving many inflammatory reactions and cytokines that affect the regulation of IL-6 production [6]. IL-6 is a single polypeptide chain that contains 185 amino acids and forms four α-helices. In healthy people, the IL-6 serum concentration is very low, about 3-4 pg/ml (about 1-9 pg/ml in lean and obese people), but it increases when there is inflammation [6]. IL-6 is a pleiotropic cytokine which affects the inflammatory reaction and contributes to metabolic syndrome [12]. IL-6 plays an important role in immune responses, inflammatory reactions, antibody mechanisms, and hematopoiesis [6]. It also regulates inflammation, decreases lipoprotein lipase activity, and regulates appetite and energy intake in the hypothalamus. IL-6 affects the transition from an acute to a chronic inflammatory condition in obesity, insulin resistance, inflammatory bowel disease (IBD), arthritis, and sepsis [2].
Acupuncture has long been used as an adjunctive therapy for obesity. Acupuncture can reduce the inflammatory response by decreasing macrophage infiltration into adipose tissue in obese patients. Macrophages are the source of pro-inflammatory adipokines, and acupuncture can decrease the number of macrophages and levels of IL-6 [10]. Thread embedding acupuncture therapy is a stimulation acupuncture method performed by embedding catgut in various acupoints. Thread embedding acupuncture has advantages over body acupuncture because it uses fewer acupoints, can be performed less frequently, and has a prolonged stimulation effect [13,14]. Extensive research has been done to examine the effects of catgut embedding acupuncture on obesity, but data on its efficacy in reducing levels of IL-6 is still limited. Therefore, this study has been conducted to examine the effects of combination therapy using catgut embedding acupuncture and diet intervention on IL-6 levels and BMI in obese patients.
Materials and Methods
This research was approved by the Research Ethics Committee of the Faculty of Medicine, Universitas Indonesia, and gained approval from the Cipto Mangunkusumo Hospital. All research subjects agreed to participate by signing an informed consent form. All acquired data were guaranteed to be confidential and participation was voluntary, without any coercion. The research design was a double-blind, randomized, controlled clinical trial. Research was conducted at the Cipto Mangunkusumo General Hospital in Jakarta, Indonesia. The inclusion criteria were that subjects be 18-60 years old, male or female, have a BMI ranging from 25-29.9, have signed the informed consent, and be willing to participate until the research was completed. The exclusion criteria were as follows: a casual plasma glucose test >200 mg/dl, any medical drug therapy or weight loss program, participation in routine workout activity, anti-inflammatory drug therapy, chronic indigestion (abdominal pain, bloating, distention, defecation disorder, or flatus for more than three months) [15], pain in three or more joints, numb sensation in the morning, nodules in the bones caused by hand arthritis [16], history of liver and kidney disorder, contraindication to thread embedding acupuncture therapy, a medical emergency, pregnancy, malignancy, blood clotting disorder, anticoagulant drug use, history of allergy to animal protein, and infection or wounds at the acupoint site [15]. The catgut embedding therapy in this research was performed using size 3.0 catgut, in 1 cm and 0.5 cm lengths, inserted with a 21G needle up to 1.5 cm deep at acupoints CV6 Qihai, CV12 Zhongwan, bilateral ST25 Tianshu, and unilateral SP6 Sanyinjiao, up to two times with a two-week interval.
The procedure was performed in the following sequence. The patient was asked to lie on their back. Anesthetic cream was applied to the catgut embedding locations at acupoints CV6 Qihai, CV12 Zhongwan, bilateral ST25 Tianshu, and unilateral SP6 Sanyinjiao. The cream was applied over an area 1 cm in diameter around each acupoint and left to absorb into the skin for 30 minutes. The operator used sterile gloves. Asepsis and antisepsis were performed at the catgut embedding locations with 70% alcohol and 10% povidone iodine. A 21G needle, which had been loaded beforehand with catgut of 1 cm or 0.5 cm length, was inserted at the designated acupoint up to 1.5 cm deep in a perpendicular position. Catgut measuring 1 cm was inserted into acupoints CV6 Qihai, CV12 Zhongwan, and ST25 Tianshu, and catgut measuring 0.5 cm was inserted into acupoint SP6 Sanyinjiao. After the 21G needle was inserted, the catgut inside the needle was pushed through using a blunt acupuncture needle sized 0.30 × 50 mm. After the catgut was embedded, the acupuncture needle was withdrawn and, at the same time, the 21G needle was also withdrawn. The operator ensured that no catgut ends protruded from the acupoint locations. Pressure was applied to the embedding location with an alcohol swab until any bleeding stopped. The location was then covered using antibacterial gauze dressing and wound dressing. The entire process was performed one by one at each acupoint location. The first catgut embedding was done at acupoint SP6 Sanyinjiao on the left foot and the second on the right foot.
The sham embedding procedure was performed in the same sequence as the catgut procedure, but a 21G needle with catgut was not inserted; only light pressure was applied at the identical acupoint locations, without wounding the patient. The data collected in this research included the IL-6 levels and BMI of the catgut embedding acupuncture therapy group (treatment group) and the sham embedding acupuncture therapy group (control). Evaluation was conducted on day 1 (at the start of the research) and day 30 (at the end of the research). IL-6 levels were examined at the Laboratorium Riset dan Esoterik Prodia using the Quantikine HS ELISA Human IL-6 Immunoassay kit with the quantitative sandwich enzyme immunoassay method.
Statistical analysis of the research output data was performed using the SPSS 2.0 program. The statistical test used depended on the variable being analyzed. For unpaired groups, numeric variables with a normal distribution were compared using an unpaired t-test. If the data distribution was not normal, the data were first transformed to normalize them; if the transformed distribution was normal, an unpaired t-test was used, and if it was still not normal, the Mann-Whitney test was used. For paired groups, numeric variables with a normal distribution were compared using the paired t-test; if the distribution was not normal, the data were transformed first and, if the transformed distribution was normal, a paired t-test was used. Data transformation to normalize the distribution was done using the Lg10 function. A comparative test result of p > 0.05 indicates no significant difference between the compared variables, whereas p < 0.05 indicates a significant difference [17].
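A minimal sketch of this decision flow in Python with scipy (the authors used SPSS; the paired branch and the fallback when the transform fails to normalize paired data are not fully specified in the text, so only the unpaired comparison is sketched):

import numpy as np
from scipy import stats

def compare_unpaired(a, b, alpha=0.05):
    # Check normality, try a log10 transform, then use an unpaired t-test
    # or fall back to the Mann-Whitney test, as described above.
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    normal = lambda x: stats.shapiro(x)[1] > alpha
    if normal(a) and normal(b):
        return "unpaired t-test", stats.ttest_ind(a, b)
    a_t, b_t = np.log10(a), np.log10(b)          # Lg10 transform to normalize
    if normal(a_t) and normal(b_t):
        return "unpaired t-test (log10)", stats.ttest_ind(a_t, b_t)
    return "Mann-Whitney", stats.mannwhitneyu(a, b)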
Results
Research was conducted on 36 obese patients who met all inclusion and exclusion criteria. All subjects were randomly allocated into two groups, the catgut embedding acupuncture group (treatment group) and the sham embedding acupuncture group (control group), with 18 subjects in each group. A statistical test on the subjects' baseline characteristics was performed for age, gender, weight, abdominal circumference, BMI, and IL-6 levels. There were no significant differences in the subjects' baseline characteristics except for age (Table 1). The final average IL-6 level was 0.47 in the catgut embedding acupuncture group and 0.53 in the sham embedding acupuncture group; the difference between the two groups was not statistically significant (p = 0.34; 95% CI: -0.17 to 0.06). The difference between the baseline and final average IL-6 levels in the catgut embedding acupuncture group was 0.13 and was statistically significant (p = 0.01; 95% CI: 0.03-0.23). The corresponding difference in the sham embedding acupuncture group was 0.01 and was not statistically significant (p = 0.90; 95% CI: -0.10 to 0.12). The average BMI was 30.24 in the catgut embedding acupuncture group and 31.14 in the sham embedding acupuncture group; the difference in final average BMI between the two groups was not statistically significant (p = 0.43; 95% CI: -3.20 to 1.41). The difference between the baseline and final average BMI in the catgut embedding acupuncture group was 0.66 and was statistically significant (p = 0.00; 95% CI: 0.43-0.88). The corresponding difference in the sham embedding acupuncture group was 0.34 and was statistically significant (p = 0.00; 95% CI: 0.17-0.52).
Discussion
This research was the first in Indonesia to apply an embedding acupuncture technique in obese patients to examine IL-6 levels and change in BMI. The catgut embedding acupuncture technique was chosen because of its advantages over body acupuncture, which include using fewer acupoints, less frequent therapy, and a prolonged acupoint stimulation effect [13,14]. The prolonged effect of catgut embedding acupuncture therapy compared to body acupuncture has been linked to the continuous stimulation produced at the acupoint. It has been reported that the combined effects of proteolytic enzymes and macrophages on catgut absorption can increase and prolong acupoint stimulation to 18-21 days, as a result of irritation in the tissue where the catgut is embedded [18]. This research measured IL-6 levels in obese patients. Interleukin-6 is a pleiotropic cytokine which affects the inflammatory condition and metabolic syndrome [12]. IL-6 affects the transition from an acute to a chronic inflammatory condition in obesity, insulin resistance, inflammatory bowel disease (IBD), arthritis, and sepsis [2]. Hence, patients with inflammatory diseases such as IBD and arthritis were excluded by the selection criteria.
Obesity is linked to chronic inflammation, and research has shown an anti-inflammatory effect of acupuncture therapy. The SP6 Sanyinjiao acupoint can effectively control weight and decrease triglyceride levels, cholesterol levels, and pro-inflammatory molecules. The ST25 Tianshu acupoint, near the tenth thoracic vertebra, is at the same height as the adrenal gland. An increase in adrenal gland production can contribute to obesity and later cause an increase in corticosteroids, which can trigger fat catabolism and cause fat redistribution throughout the body. Acupuncture at this acupoint is expected to help normalize adrenal gland production, so that fat redistribution decreases. The ST25 Tianshu acupoint has been shown to decrease weight and increase peroxisome proliferator-activated receptors [8]. The CV12 Zhongwan acupoint is near the seventh thoracic vertebra, which relates to the innervation of the stomach; stimulation can elicit parasympathetic responses and increase bowel peristalsis. A meta-analysis by Guo et al. showed that the most frequently used acupoints in catgut embedding therapy were ST25 Tianshu, CV12 Zhongwan, ST40 Fenglong, CV6 Qihai, SP15, SP6 Sanyinjiao, and ST36 Zusanli. These acupoints have been called anti-obesity acupoints [19].
The results of this research showed that IL-6 levels were lower in the catgut embedding acupuncture group. This result differs slightly from that of Ismail et al., who found a statistically significant difference in average IL-6 levels between the treatment and control groups. Ismail et al. used a longer research period of six months, which might be one factor that affected the results [20]. The mean BMI reduction in this research was 0.67 in the catgut embedding acupuncture group and 0.34 in the sham embedding acupuncture group. The reduction in the catgut embedding group was 0.33 greater than in the sham embedding acupuncture group, a difference that was statistically significant (p = 0.02; 95% CI: 0.05 to 0.61). This result supports the findings of Guo et al., Zhang et al., and Vivas et al., in which the final BMI in the catgut embedding acupuncture group decreased more than in groups that did not receive embedding therapy.
Research on how acupuncture affects pro-inflammatory cytokines in obesity is still limited. A few experimental studies have shown that the brain and immune system communicate through two paths, an innervation path and a humoral path. The innervation path informs the brain about inflammation and other tissue damage so that the brain will produce a local inflammatory response. There is also research suggesting that stimulation of peripheral acupoints can be delivered to the central nervous system by two pathways: a fast transmission pathway involving vagal afferent nerves and the nucleus tractus solitarii, and a slow transmission pathway involving cytokines. The stimulation activates the adrenal neuroendocrine axis, increasing the secretion of catecholamines. Catecholamines bind β2 receptors on immune cells, causing TNF-α, IL-1β, and IL-6 pro-inflammatory cytokine levels to decrease, and causing the anti-inflammatory cytokine IL-10 to increase. Acupuncture may inhibit the expression of the pro-inflammatory cytokine IL-6 by restoring the Th1/Th2 balance. Another activated immunomodulation path is the cholinergic anti-inflammatory pathway. Binding of acetylcholine to nicotinic acetylcholine receptors (α7nAChR) on macrophages inhibits the synthesis of pro-inflammatory cytokines but not of anti-inflammatory cytokines. The cholinergic anti-inflammatory path shows that acupuncture engages a physiological mechanism with anti-inflammatory and antipyretic activity mediated by the regulation of certain cytokines such as IL-6 and IL-1 [21].
Acupuncture could decrease the inflammatory response in adipose tissue by decreasing MCP-1. Existing data show increased macrophage infiltration in obesity, and acupuncture could lower this infiltration so that the production of pro-inflammatory macrophages and adipokines decreases [10]. Peroxisome proliferator-activated receptor-γ (PPAR-γ) is a nuclear hormone receptor known to affect the metabolism of lipids and glucose and the regulation of mesenchymal cells, and it has a proven anti-inflammatory effect [14,22]. Acupuncture has been shown to increase the activation of PPAR-γ coactivator 1α, which functions as a transcriptional coactivator of PPAR-γ. PPAR-γ could inhibit pro-inflammatory cytokines released from macrophages [14]. This mechanism could decrease the release of the pro-inflammatory cytokine IL-6. The ST25 Tianshu acupoint has been shown to decrease weight and increase PPAR [9]. PPARs are ligand-activated transcription factors involved in the regulation of inflammation and of metabolic syndrome in obesity. PPAR-γ is considered a main regulator of adipogenesis and has been widely studied for its role in obesity. PPAR-γ is mostly expressed in human adipose tissue, but can also be found in other organs, such as skeletal muscle, the lungs, and the colon. PPAR-γ target genes promote adipocyte differentiation, fat storage, and glucose metabolism, and include lipoprotein lipase and adiponectin. PPAR-γ in the liver is involved in triglyceride homeostasis and protects other tissues from triglyceride accumulation and insulin resistance. There are two molecular mechanisms of the PPAR-γ anti-inflammatory effect: inhibiting pro-inflammatory transcription factors such as STAT, NF-κB, and activator protein-1 (AP-1); or preventing the removal of corepressor complexes from the gene, thereby suppressing the transcription of inflammatory genes. PPAR-γ could reverse the macrophage infiltration condition and reduce inflammatory gene expression. PPAR-γ could reduce inflammation in activated macrophages by disturbing NF-κB pathway signalling [7]. Further, pharmacological therapy for treating obesity has been used for over a hundred years. In the past, there were many limitations to the available drugs. Since obesity is a chronic disease, pharmacological therapy was recommended for long-term use, though this resulted in many side effects. These drugs have several serious side effects and teratogenic risks, which make them unsafe for pregnant women or women who are planning to become pregnant. Other side effects include tachycardia, changes in vision, changes in mood, insomnia, headache, confusion, constipation, and dry mouth [19]. An advantage of catgut embedding acupuncture therapy is that it has none of these side effects, unlike the pharmacotherapy used for treating obesity. Additionally, acupuncture can decrease the inflammatory response in adipose tissue [10].
Conclusion
The results suggest that acupoint catgut embedding therapy may help to reduce IL-6 levels and BMI in obese patients receiving dietary intervention.
|
v3-fos-license
|
2018-04-03T03:47:33.335Z
|
2016-03-09T00:00:00.000
|
206980173
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://bmcpublichealth.biomedcentral.com/track/pdf/10.1186/s12889-016-2913-4",
"pdf_hash": "fdc90e15144397ee749a8f9f738c18f53b89042c",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46522",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "fdc90e15144397ee749a8f9f738c18f53b89042c",
"year": 2016
}
|
pes2o/s2orc
|
Sleep duration and risk of obesity among a sample of Victorian school children
Background Insufficient sleep is potentially an important modifiable risk factor for obesity and poor physical activity and sedentary behaviours among children. However, inconsistencies across studies highlight the need for more objective measures. This paper examines the relationship between sleep duration and objectively measured physical activity, sedentary time and weight status, among a sample of Victorian Primary School children. Methods A sub-sample of 298 grades four (n = 157) and six (n = 132) Victorian primary school children (aged 9.2-13.2 years) with complete accelerometry and anthropometry data, from 39 schools, were taken from a pilot study of a larger state based cluster randomized control trial in 2013. Data comprised: researcher measured height and weight; accelerometry derived physical activity and sedentary time; and self-reported sleep duration and hypothesised confounding factors (e.g. age, gender and environmental factors). Results Compared with sufficient sleepers (67 %), those with insufficient sleep (<10 hrs/day) were significantly more likely to be overweight (OR 1.97, 95 % CI:1.11-3.48) or obese (OR 2.43, 95 % CI:1.26-4.71). No association between sleep and objectively measured physical activity levels or sedentary time was found. Conclusion The strong positive relationship between weight status and sleep deprivation merits further research though PA and sedentary time do not seem to be involved in the relationship. Strategies to improve sleep duration may help obesity prevention initiatives in the future.
Background
The underlying determinants of childhood overweight and obesity have been the subject of much research globally [1,2], and it has been established that physical activity (PA), sedentary time (ST) and dietary intake are key modifiable risk factors [2][3][4][5]. Recent thinking has widened the range of potential modifiable factors to include insufficient sleep [2,6,7]. Findings from recent studies indicate that children who sleep for insufficient durations (<10 hrs) are more likely to exhibit higher Body Mass Index (BMI), waist circumference (WC) and obesity rates [8][9][10].
While some variation in recommendations exists, it is commonly accepted that children aged five to 12 years should receive between 9 and 11 hours of sleep per night [11,12]. Internationally, average sleep durations have been decreasing over recent decades [13]. Australian data have shown a decline in average sleep duration of approximately 28 mins for girls and 33 mins for boys between 1985 and 2004 [14]. The global trend of decreasing sleep has occurred at the same time as increases in the prevalence of overweight and obesity [13][14][15]. These trends are supported by studies that found insufficient sleep among children was associated with an increased risk of overweight and obesity [16][17][18][19], reduced physical activity and increased sedentary behaviours [9,16,17]. Children's increased screen time and use of electronic devices around bedtime have also been theorised as a potential link to these trends [19].
A recurring issue in the available literature on children's sleep, weight status, PA and SB is the predominant use of subjective measures [14,17]. While some studies have incorporated objective measures, some findings are conflicting and inconsistencies in measurement methods make comparison of results difficult [20]. Findings from two studies using wrist-worn accelerometers [21,22] and one using hip-worn accelerometry [23] to objectively measure children's activity have contrasted greatly. Gupta et al. (2002) reported that poor sleep over a 24 hour period led to significant reductions in next-day PA [22], while Eskedt et al. (2013) reported no significant association between sleep duration and PA levels over a 7 day period [21]. Contradicting both of these studies, hip-worn accelerometry data from Nixon et al. (2008) found that higher levels of daytime activity were associated with reduced sleep durations [23]. Factors such as not separating children from adolescents (as sleep needs change from childhood to adolescence) may have influenced results among the sample of children aged 11-16 years in Gupta et al. [22]. The restricted length of monitoring in this study (24 hrs) may also not be fully representative of usual behaviour [24]. In addition, the wear site of accelerometers has been reported to influence readings, with a validation study reporting that wrist accelerometry data tend to significantly overestimate children's PA and underestimate ST [20].
As Australian studies investigating the relationship between children's sleep, weight status, PA and SB have been cross-sectional in nature and reliant on subjective measures [17,[25][26][27], the use of hip-worn accelerometry to objectively measure children's PA and SB would strengthen our understanding of the association between sleep and these factors among Australian children.
In this study we examine the relationship between sleep duration and PA, ST and weight status among a sample of Victorian primary school children, using objective measures of PA and ST.
In this paper it is hypothesised that, compared with sufficient sleepers, children with insufficient sleep durations will more likely be overweight or obese and record lower average daily PA and higher average daily ST. It is also hypothesised that environmental factors around increased screen behaviours (the number of TVs per house and having a TV or electronic gaming device in the bedroom) will significantly reduce children's sleep duration.
Methods
This study utilised a sub-sample of cross-sectional data on grades four and six primary school children from a pilot study (2013), of a state-based cluster randomised control trial, the methods of which have previously been published (Strugnell et al., under review).
A random sample of 156 primary schools were invited to participate, of which 39 consented (school-level response rate (RR) = 25 %). All grades four and six students within these schools were then invited to participate and provided with plain language statements and consent forms. This pilot study used opt-in consent, where participation required a parent/guardian signed consent form. Of the 2,357 invited, 839 students returned the consent form, enabling their participation (student-level RR = 35.6 %). In order to analyse objectively measured PA and ST, only data from a sub-sample of students who were provided with an accelerometer (N = 373) was used.
Anthropometric measures
Trained data collectors collected measurements on children's height (HM200P Stadiometer, Charder Electronic Co, Ltd), weight (UC-321 scale, A&D Australasia Pty Ltd) and waist circumference (WC) (Lufkin W606PM metal tape measure, Apex Tool Group, LLC), following the procedure previously published [28]. Children's measurements were taken wearing one layer of light clothing (e.g. t-shirt and pants) with shoes and jumpers removed. All measurements were taken twice, to the nearest 0.1 cm for height and waist and the nearest 0.01 kg for weight, with a third measurement required for any discrepancies greater than 0.5 cm or 0.5 kg. The mean height and weight measurements were used to generate BMI-z scores according to the WHO international BMI growth reference standards [29].
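As an illustration of how the duplicate-measurement rule and a BMI-for-age reference might be combined, the following Python sketch applies the averaging rule described above together with the standard LMS z-score formula used by growth references. The LMS values in the example are placeholders rather than actual WHO reference values, which are age- and sex-specific.

```python
import numpy as np

def combine_measurements(m1, m2, m3=None, tolerance=0.5):
    """Average duplicate measurements; use the median of three
    when the first two differ by more than the tolerance."""
    if abs(m1 - m2) > tolerance:
        if m3 is None:
            raise ValueError("Third measurement required for discrepancy > tolerance")
        return float(np.median([m1, m2, m3]))
    return (m1 + m2) / 2.0

def bmi_z_score(bmi, L, M, S):
    """Standard LMS transformation: z = ((bmi/M)**L - 1) / (L*S) for L != 0."""
    if L == 0:
        return np.log(bmi / M) / S
    return ((bmi / M) ** L - 1) / (L * S)

# Example with hypothetical measurements and LMS values
height_cm = combine_measurements(142.3, 142.5)          # within 0.5 cm -> mean
weight_kg = combine_measurements(38.96, 39.60, 39.10)   # >0.5 kg -> median of three
bmi = weight_kg / (height_cm / 100) ** 2
print(round(bmi_z_score(bmi, L=-1.4, M=17.8, S=0.12), 2))
```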
Sleep duration
Children were asked "During the past 7 days, how much time did you usually spend sleeping per night?" and then selected one of the nine options ("less than 5 hours", "5 hours" "6 hours", "7 hours", "8 hours", "9 hours" "10 hours", "11 hours", or "more than 12 hours"). Aligning with recommendations by the Australian Sleep Health Foundation [12] and classifications used in previous literature [17,30,31], sufficient sleep was categorised as 10 or more hours of sleep per night and less than this as insufficient sleep.
Physical activity and sedentary time
Every second student across genders and grade levels (e.g. the 1st, 3rd and 5th boys and girls in Grade 6, etc.) was invited to wear a waist-worn ActiGraph GT3X or GT3X+ accelerometer (ActiGraph, Pensacola, FL) to collect objectively measured data on daily PA and ST. Participants were instructed to wear the monitor on the right hip during waking hours for seven days, except during water-based or high-contact activities (e.g. martial arts).
To reduce the possibility of discrepancies between ActiGraph models, both were programmed with a 15-second epoch and a 30-hertz sampling rate [32]. A day was considered valid if ≥600 minutes of wear time (mins.d−1) was recorded, according to the wear/non-wear time criteria of Troiano and colleagues (2008) [33], with a minimum of three valid days required. Average daily moderate to vigorous physical activity (MVPA), light intensity physical activity (LPA) and ST were determined by summing the time spent in each category according to tri-axial activity count cut-points developed for children aged 10-15 years [34]. Using the average day method outlined in [35], total average MVPA time was dichotomised according to the Australian PA guidelines for children into an average of ≥60 min/day (guidelines met) or <60 min/day (guidelines not met) [36,37], allowing for categorical analysis of children's PA. As there are currently no specific daily ST guidelines (beyond the restriction of screen time to 2 hrs/day) or daily LPA guidelines [38,39], low, moderate and high tertiles were created to enable categorical comparison of these variables with sleep duration.
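A minimal sketch of the wear-time and classification logic described above is given below, assuming a data frame of per-day minutes already derived from the count data with the published cut-points. The column names and example values are assumptions; only the ≥600 min/day, ≥3 valid days and ≥60 min/day MVPA criteria come from the text.

```python
import pandas as pd

# Hypothetical per-participant, per-day summary derived from 15-s epoch counts
days = pd.DataFrame({
    "id":       [1, 1, 1, 1, 2, 2, 2],
    "wear_min": [640, 610, 700, 590, 650, 620, 605],
    "mvpa_min": [55, 70, 65, 40, 30, 45, 50],
    "lpa_min":  [210, 230, 200, 180, 190, 205, 199],
    "sed_min":  [375, 310, 435, 370, 430, 370, 356],
})

valid = days[days["wear_min"] >= 600]             # valid day: >=600 min wear time
per_child = valid.groupby("id").agg(
    n_valid=("wear_min", "size"),
    mvpa=("mvpa_min", "mean"),                    # average-day method
    lpa=("lpa_min", "mean"),
    sed=("sed_min", "mean"),
)
per_child = per_child[per_child["n_valid"] >= 3]  # >=3 valid days required

per_child["meets_pa_guideline"] = per_child["mvpa"] >= 60   # >=60 min/day MVPA
per_child["st_tertile"] = pd.qcut(per_child["sed"], 3,
                                  labels=["low", "moderate", "high"])
print(per_child)
```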
Demographic variables
Demographic items included age, gender and whether English was the only language spoken at home.
Confounding variables
Participants were asked to report on environmental factors surrounding sedentary behaviours, the presence of a TV in the bedroom (Yes/No) as well as having an electronic gaming device (EGD) or laptop/computer in the bedroom (Yes/No). Clustering by Local Government Areas was included to account for the sampling design.
Statistical analysis
STATA SE12 (Stata Corporation, Texas, USA) was used for all statistical analysis. Aligned with recommendations from previous studies [16,24,[40][41], only participants with ≥3 days of accelerometry wear-time data were included for analysis (N = 298; 80 % of possible participants). Parametric tests for normality were conducted on all independent variables, resulting in participants with values ±3SD from the mean on independent variables of MVPA, LPA, ST, BMI, and WC being omitted from the relevant analysis (n = 9).
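The ±3 SD exclusion rule described above could be sketched as follows; the variable names are assumptions, and a listwise variant is shown for simplicity even though the study omitted participants only from the relevant analysis.

```python
import pandas as pd

def flag_within_3sd(series: pd.Series) -> pd.Series:
    """True where a value lies within mean +/- 3 SD of its variable."""
    mu, sd = series.mean(), series.std()
    return series.between(mu - 3 * sd, mu + 3 * sd)

def apply_exclusions(df: pd.DataFrame) -> pd.DataFrame:
    """Drop rows with any checked variable outside +/- 3 SD (listwise variant)."""
    checked = ["mvpa", "lpa", "sed", "bmi", "waist"]   # hypothetical column names
    mask = pd.concat([flag_within_3sd(df[c]) for c in checked], axis=1).all(axis=1)
    return df[mask]
```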
Chi-squared and paired t-tests were used to assess gender differences for each variable. Chi-square tests were also conducted to assess potential differences in weight status, LPA, MVPA and ST between insufficient and sufficient sleepers, using the categorical expression of each variable. Lastly, multivariate hierarchical logistic regression analyses were used to estimate the effect of sleep duration on weight status after adjusting for hypothesised covariates including age, gender, Socio-Economic Indexes for Areas (SEIFA), PA, ST, TV in the bedroom and computer/EGD in the bedroom.
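The adjusted odds ratios reported in the Results could, for example, be estimated with a logistic model of the following form. This is a generic sketch using statsmodels rather than the exact STATA specification used in the study (the clustering by Local Government Area is not reproduced here), and the variable names are assumptions.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# df: one row per child with hypothetical columns:
#   overweight (0/1), insufficient_sleep (0/1), age, gender, seifa,
#   mvpa, sed, tv_bedroom (0/1), egd_bedroom (0/1)
def fit_weight_model(df: pd.DataFrame):
    model = smf.logit(
        "overweight ~ insufficient_sleep + age + C(gender) + C(seifa)"
        " + mvpa + sed + C(tv_bedroom) + C(egd_bedroom)",
        data=df,
    ).fit(disp=False)
    odds_ratios = np.exp(model.params)       # exponentiate coefficients to get ORs
    conf_int = np.exp(model.conf_int())      # 95 % confidence intervals for the ORs
    return odds_ratios, conf_int
```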
Results
Of the 289 participants (56 % girls; 54.3 % grade four) with valid accelerometer data, 30.5 % were categorised as overweight or obese (36.2 % boys, 25.9 % girls), 87 % came from English-speaking homes and 52.5 % lived in areas categorised as being in the lower two SEIFA quintiles (Table 1).
A third (33.2 %) of all participants were classified as receiving insufficient sleep, with no significant gender differences (χ2 = 9.68, p = 0.84). Of those categorised as insufficient sleepers, 39.6 % were also classified as either overweight (25.0 %) or obese (14.6 %). This proportion was significantly greater than the proportion of sufficient sleepers categorised as overweight or obese (25.9 %; χ2 = 5.66, p = 0.02).
No significant difference was found (χ2 = 3.46, p = 0.12; and χ2 = 2.21, p = 0.33, respectively) between the proportion of children categorised as overweight or obese with or without accelerometry data, or between participants with valid or invalid accelerometry wear-time. However, sleep categorisation was found to differ significantly between the excluded participants and the analysed sample (χ2 = 9.72, p = 0.00), with the current sample displaying an under-representation of overweight/obese insufficient sleepers (13 %) compared to the non-analysed participants (16 %).
A multivariate hierarchical logistic regression (Model 1; Table 2) shows that insufficient sleepers were more likely to be in the overweight range (OR 1.72; 95 % CI:1.10-2.68) compared to the normal weight range, but no such association was observed for those in the obese range (OR 1.80; 95 % CI:0.94-3.45). Adjustment of the initial model for age, gender, SEIFA and study condition (Model 2; Table 2) shows insufficient sleepers were more likely to be categorised as overweight (OR 1.88; 95 % CI:1.14-3.13).
Discussion
The hypothesis that children with insufficient sleep are more likely to be categorised as being overweight or obese, less physically active and more sedentary was partly supported. Using objective measures of height and weight we found that weight status was inversely associated with children's self-reported sleep duration. In contrast to previous studies no relationship was observed between PA, ST and sleep duration. However, in the current study age and presence of computer/EGD in the bedroom were inversely associated with children's sleep duration.
In this study, a third of participants were categorised as insufficient sleepers. The high level of sleep deprivation in the current sample is comparable with that reported by Shi et al. (2010), where 48.2 % of a sample of 3,495 South Australian children (5-15 years) were reported by their parents to sleep less than 10 hours per night [17]. Other estimates of insufficient sleep are higher; a nationally representative sample of 6,324 Australian 7-15 year olds found that almost two-thirds (62.4 %) reported not getting the recommended 10+ hours of sleep per night [26]. This may suggest that the national prevalence of sleep deprivation among children could be an even more substantial issue outside of the two states of South Australia and Victoria, or may more likely reflect the difference between the self-report and parent-report measures used.
Despite these disparities, the findings regarding the size and direction of the relationship between sleep and children's weight status in this study are consistent with local and international studies such as those by Shi et al. (2010) and Seegers et al. (2011) [17,42]. Our results suggest that children reporting insufficient sleep were almost twice as likely to be overweight and almost two and a half times more likely to be obese than those meeting sleep duration recommendations. Our study supports international findings from 1,916 10-13 year olds in Quebec showing that insufficient or shorter sleep durations among children increased the odds of being obese by 1.41 (95 % CI:1.24, 1.61) times [42].
As several national sleep guidelines have recently been extended, suggesting nine and potentially eight hours of sleep per night might be sufficient for some children [12,43], additional analyses were conducted to explore how adjusting the categorisation of insufficient sleep influenced results in the current sample. While our initial analysis indicated that sleeping ≤9 hours per night increased the odds of overweight and obesity compared with sleeping ≥10 hours, when recategorising insufficient sleep as ≤8 or ≤7 hours of sleep per night no significant association was found. However, the reduced power due to the low representation of participants in these categories (only 16 % sleeping ≤8 hours per night and only 9 % sleeping ≤7 hours) makes it difficult to assume that these cut-points would not reach significance in larger samples. This highlights the importance of considering the cut-points used to determine sufficient sleep in future studies, as well as the need for larger, more representative samples. We found no relationship between children's sleep duration and average MVPA, LPA or ST, which contrasts with findings from previous studies [9,16,17,21]. The lack of association between these factors in the current study could be due to a number of reasons, including: the difference between the subjectively measured PA and ST of previous studies and the current objective accelerometry data; the slightly smaller sample size due to the restricted number of available accelerometers; or the characteristics of the participants who adhered to wear-time requirements and were therefore included in the analysis.
There has been some indication that participants with higher BMI, waist circumference and sedentary behaviours may be less likely to meet wear-time requirements [44]. In such cases, results may not fully represent the study population and may produce an overestimation of the sample population's PA and an underestimation of average ST [44]. However, our analysis does not support this, as no significant weight status differences were found between children with valid versus invalid accelerometry wear-time, or between participants with or without accelerometry data.
Although we found no significant association between sleep duration and children's ST in the current study, it is interesting to note that having a computer/EGD in the bedroom was associated with children's sleep duration. Children who reported having a computer/gaming console in the bedroom had twice the odds of not receiving the recommended ≥10 hours of sleep per night. One potential explanation for this relationship is that sedentary behaviours involving a computer/EGD directly reduce children's sleep duration by interfering with time that should be dedicated to sleeping, without necessarily influencing their overall average daily ST [45]. It has also been suggested that exposure to the artificial light created by these screen behaviours may lead to disruptions in the circadian rhythm, with increased alertness and decreased sleep onset and duration [46]. It may be useful for future research to examine the times of day that children engage in screen-based activities so that recommendations can be extended to include guidelines around screen-free times in order to promote sufficient sleep.
Another possible explanation for the association between computers/EGDs and insufficient sleep among children may be linked to the significant inverse association between children's sleep and their age, demonstrated in the current and previous studies [17,21,47]. While the literature suggests this age-associated decline in sleep duration could be related to the later bed times of older children without corresponding adjustment of wake-up times [14,48], it has also been proposed that it might be due to the higher usage of electronic devices (such as computers, phones and TVs) among older children and adolescents [9,21,[47][48][49].
More work is needed to gain a deeper understanding of the modifiers and confounders of the association between children's sleep and weight status. Despite not finding an association between sleep and objectively measured average daily PA and ST, our finding of an association between electronic devices such as computers/EGDs and reduced sleep duration highlights the need to better understand how these behaviours influence children's sleep. Future studies would also benefit from more rigorous measures of sleep duration (such as accelerometers).
Conclusion
Among this sample of Victorian primary school children, sleep duration was inversely associated with weight status, though no association was found with objectively measured PA or ST. Insufficient sleep was significantly more common among children with a computer/EGD in their bedroom. The findings suggest sleep is a plausible target behaviour for obesity prevention initiatives.
Ethics
Ethics approval was received from Deakin University's Human Research Ethics Committee, the Victorian Department of Education and Early Childhood Development and the Catholic Education Office Archdioceses of Melbourne, Sale, Sandhurst and Ballarat for this study.
|
v3-fos-license
|
2018-06-05T09:17:01.138Z
|
2018-06-01T00:00:00.000
|
46928155
|
{
"extfieldsofstudy": [
"Biology",
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://res.mdpi.com/d_attachment/ijms/ijms-19-01666/article_deploy/ijms-19-01666.pdf",
"pdf_hash": "3d53cd39dff56684118c7802511ab663a08ba28a",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46524",
"s2fieldsofstudy": [
"Biology",
"Agricultural and Food Sciences",
"Environmental Science"
],
"sha1": "3d53cd39dff56684118c7802511ab663a08ba28a",
"year": 2018
}
|
pes2o/s2orc
|
Wheat Gene TaATG8j Contributes to Stripe Rust Resistance
Autophagy-related 8 (ATG8) protein has been reported to be involved in plant’s innate immune response, but it is not clear whether such genes play a similar role in cereal crops against obligate biotrophic fungal pathogens. Here, we reported an ATG8 gene from wheat (Triticum aestivum), designated TaATG8j. This gene has three copies located in chromosomes 2AS, 2BS, and 2DS. The transcriptions of all three copies were upregulated in plants of the wheat cultivar Suwon 11, inoculated with an avirulent race (CYR23) of Puccinia striiformis f. sp. tritici (Pst), the causal fungal pathogen of stripe rust. The transient expression of TaATG8j in Nicotiana benthamiana showed that TaATG8j proteins were distributed throughout the cytoplasm, but mainly in the nucleus and plasma membrane. The overexpression of TaATG8j in N. benthamiana slightly delayed the cell death caused by the mouse apoptotic protein BAX (BCL2-associated X protein). However, the expression of TaATG8j in yeast (Schizosaccharomyces pombe) induced cell death. The virus-induced gene silencing of all TaATG8j copies rendered Suwon 11 susceptible to the avirulent Pst race CYR23, accompanied by an increased fungal biomass and a decreased necrotic area per infection site. These results indicate that TaATG8j contributes to wheat resistance against stripe rust fungus by regulating cell death, providing information for the understanding of the mechanisms of wheat resistance to the stripe rust pathogen.
Introduction
Plants are subjected to various biotic and abiotic stresses [1]. Upon environmental stimuli, autophagy is upregulated to sense the intracellular stimuli and mount reactions to protect plants from damage [2]. Autophagy, which is also called self-eating, is a catabolic process in eukaryotic cells that cleans up excessive or unwanted cellular components and macromolecules for nutrient recycling [3,4]. Throughout this process, cytoplasmic components, for example, proteins and different organelles, are targeted to either lysosomes/endosomes or vacuoles for degradation inside these compartments [5]. As a highly regulated and dynamic process that degrades dysfunctional or
Identification and Isolation of the TaATG8j Gene
Based on the expressed sequence tag (EST) sequence (GenBank Accession No. GR302394.1) from the cDNA library of a compatible wheat-Pst interaction, a 695-bp cDNA sequence with a 360-bp open reading frame (ORF) was obtained from Su11 wheat, which encoded an ATG8-heterologous protein. A BLASTN (Basic Local Alignment Search Tool Nucleotide) search in the Chinese Spring genomic database revealed three copies of this gene in the wheat genome (Figure S1), located on the wheat chromosomes 2AS, 2BS and 2DS. To obtain more information about TaATG8j in the wheat sub-genomes, we downloaded the exon-intron structures and assembled exon sequences, designated TaATG8j-2AS, TaATG8j-2BS and TaATG8j-2DS, respectively (Figure S1 and Table 1). The TaATG8j gene from Su11 shared 99.72%, 98.61% and 98.34% nucleotide identity with the three copies TaATG8j-2AS, TaATG8j-2BS and TaATG8j-2DS of Chinese Spring in the ORF region (Figure S1). At the protein level, however, only one amino acid variation was observed between TaATG8j and the predicted proteins of the three homoeologous genes (Figure S2). Considering exon structure, all three copies of TaATG8j consist of the same number (five) of exons (Figure S3). Plants contain expanded ATG8 family members. Similarly, in addition to the TaATG8j gene, we identified nine other ATG8-heterologous proteins in the wheat genome: TaATG8a-h (GenBank accession numbers KF294807-KF294814) and TaATG8i, a protein assembled from three EST sequences (HX189738, CK196170 and HX250137) [29]. Therefore, the ATG8 gene identified in this study was named TaATG8j (Figures 1 and 2). A structural sequence analysis showed that TaATG8j encoded a putative protein of 119 amino acids, with a conserved UBQ (ubiquitin-like) superfamily domain, six predicted tubulin binding sites (black arrows) and three predicted ATG7 binding sites (green arrows) (Figure 2). Multi-protein sequence alignment with other organisms showed that TaATG8j shared the highest identity (95.93%) with ATG8g from T. aestivum and ATG8 from Triticum dicoccoides. Additionally, TaATG8j shared 82.93%, 82.11%, 90.24%, and 86.18% identity with TaATG8a-f from wheat, BdATG8 from Brachypodium distachyon, OsATG8 from rice and ZmATG8a from maize, respectively. The lowest identity (less than 70%) was observed for SsATG8 from yeast and TaATG8h-i from wheat (Figure 2).
Phylogenetic analyses indicated two distinct clades that comprised all members of the ATG8 subfamily. Clade 1 comprised plant members including the TaATG8j protein, and Clade 2 was related to human ancestors. Most of the paralogs of Oryza sativa (Os), Glycine max (Gm), Arabidopsis thaliana (At) and T. aestivum (Ta) were in Clade 1, but some of their paralogs were also present in Clade 2, indicating that the ancestor of Clade 1 and its descendants frequently duplicated during genomic evolution (Figure 1). The lengths of the branches indicate the closeness of the evolutionary relationships between the ATG8 proteins. Figure 1. Phylogenetic analysis of TaATG8j with other members of the ATG8 (autophagy-related 8) family. The phylogenetic tree was constructed using the neighbor-joining method and the software MEGA 7. The internal branches were assessed by bootstrap values, with 1000 bootstrap replicates and a branching cut-off of 50%. Branches are labelled with the protein name and GenBank accession number; one sequence was assembled from three EST (expressed sequence tag) sequences (HX189738, CK196170 and HX250137). Ta: Triticum aestivum; Bd: Brachypodium distachyon; Os: Oryza sativa; Zm: Zea mays; Gm: Glycine max; At: Arabidopsis thaliana; Td: Triticum dicoccoides; Sl: Solanum lycopersicum; Nb: Nicotiana benthamiana; Kf: Klebsormidium flaccidum; Sc: Saccharomyces cerevisiae; Ot: Ostreococcus tauri; Eh: Emiliania huxleyi; Hs: Homo sapiens.
TaATG8j Is Induced upon Incompatible Pst Attack
To explore the function of TaATG8j in wheat defense against Pst, the transcript level of TaATG8j was quantified in both compatible and incompatible wheat-Pst interactions using genome-specific primers (Table S1). In the incompatible interaction, the transcripts of all copies of TaATG8j were significantly upregulated (more than 2.0-fold) at 24 hpi compared with the control (0 hpi), although the upregulation level differed slightly. In contrast, the compatible interaction suppressed these copies (Figure 3). These results indicate that all three copies (TaATG8j-2AS, TaATG8j-2BS and TaATG8j-2DS) of TaATG8j were upregulated in the early stage of the incompatible interaction (24 hpi). Figure 3. Wheat seedlings were inoculated with CYR23 (incompatible) and CYR31 (compatible) and sampled at 0, 6, 12, 24, 48 and 120 hpi. The data were standardized to the wheat elongation factor TaEF-1α gene. The relative expression level of the gene was quantified using the comparative threshold (2^−ΔΔCt) method. Error bars indicate the variation among three replications, and different letters represent significant differences (p < 0.05) by the Tukey HSD (Honestly Significant Difference) test. hpi: hours post-inoculation.
Protein TaATG8j Is Distributed throughout the Cytoplasm but Mainly in Nuclei and Plasma Membranes
The localization of ATG8 proteins on the surface of membranes is necessary for autophagosome biogenesis [29]. To determine the subcellular localization of TaATG8j in leaf tissue of N. benthamiana, the recombinant PB-eGFP-TaATG8j construct was infiltrated into N. benthamiana. The empty vector (EV) PBinGFP2-GFP was used as a control. Using DAPI (4′,6-diamidino-2-phenylindole) and a hypertonic solution, fluorescence microscopy revealed that GFP-TaATG8j fusion proteins were distributed throughout the cytoplasm, especially in the nucleus and plasma membrane (Figure 4A). To confirm this result, we conducted Western blotting to analyze the stability of the TaATG8j fusion protein. Stable TaATG8j fusion proteins and EV PBinGFP2-GFP proteins were observed at 40 and 27 kDa, respectively (Figure 4B).
TaATG8j Delays Cell Death Triggered by BAX in N. benthamiana
To investigate the potential role of TaATG8j in programmed cell death (PCD), we overexpressed TaATG8j in N. benthamiana through an A. tumefaciens (GV3101)-mediated infiltration assay. EV and Avr1b were used as negative and positive controls, respectively. When expressed alone, EV, Avr1b or TaATG8j did not cause cell death (Figure 5). Five days post-infiltration with A. tumefaciens carrying PVX-BAX, obvious cell death was observed at the sites previously infiltrated with EV, confirming the cell death-inducing activity of BAX. When co-expressed with Avr1b, BAX did not cause cell death, indicating that it was suppressed by Avr1b. For TaATG8j, slight cell death was detected at five days after BAX infiltration, while significant cell death was observed at six days post-infiltration (Figure 5). These results indicated that TaATG8j alone was not able to induce cell death but rather delayed the cell death triggered by BAX. Figure 5. N. benthamiana leaves were infiltrated with Agrobacterium tumefaciens cells carrying TaATG8j (circles 3 and 6), Avr1b (circles 2 and 5) or EV (circles 1 and 4). Photographs were taken at 5 and 6 days after the second infiltration (the second infiltration was done only in circles 1, 2 and 3, at 24 h after the first infiltration, using A. tumefaciens carrying PVX:BAX). The circular areas indicate the infiltrated spaces.
Overexpression of TaATG8j Induces Cell Death in Yeast
After overexpression in N. benthamiana cells, we further investigated the role of TaATG8j in cell death using a fission yeast system. TaATG8j was overexpressed in fission yeast (S. pombe) under the control of the nmt promoter, which is suppressed by thiamine (VB). The mouse BAX gene, a pro-apoptotic factor that induces cell death in yeast, and the empty pREP3X vector were used as positive and negative controls, respectively [30]. Yeast cells were cultured with (+VB) or without thiamine (−VB), and the cell death phenotype was checked with trypan blue. As shown in Figure 6A, dead yeast cells were stained blue. The expression of TaATG8j or BAX led to significantly decreased numbers of total alive yeast cells in the absence of thiamine (−VB) compared to those with thiamine (+VB) throughout the incubation period, whereas the expression of pREP3X did not cause any significant change (Figure 6B). Furthermore, the ratio of dead (stained) to total yeast cells was calculated. The results showed that yeast cells expressing BAX incubated without thiamine (−VB) exhibited significantly more dead cells than those incubated with thiamine (+VB) after 14 to 34 h of incubation (Figure 6C). Similar results were obtained for TaATG8j, except at 14 and 18 h after incubation (Figure 6C). However, yeast cells transformed with empty pREP3X did not exhibit any remarkable change in the ratio of dead cells. These results indicated that the overexpression of TaATG8j induced cell death in fission yeast. Figure 6. Overexpression of TaATG8j in yeast cells. The mouse pro-apoptotic BAX gene and the pREP3X empty vector were used as positive and negative controls, respectively. Thiamine (VB) was used for repression and trypan blue as the staining medium. Yeast cells were counted using a hemocytometer. Three biological replicates were performed. The number of yeast cells per mL was calculated at 14, 18, 22, 26, 30 and 34 h post-incubation. Incubation was started from an identical OD600 of 0.2, with or without VB. (A) Phenotypically dead and alive yeast cells were compared by trypan blue staining, with or without VB, after expressing pREP3X_TaATG8j, pREP3X_BAX, or pREP3X. Dead yeast cells were stained blue. Bar: 20 µm; (B) The total number of cells per mL was counted. The total numbers of alive and dead yeast cells expressing TaATG8j or BAX were significantly altered compared with the control; (C) The percentage (%) of dead yeast cells out of total cells was calculated. Yeast cells expressing TaATG8j or BAX showed more cell death than the control. Mean data are presented, and error bars indicate the variation among the biological replicates. Asterisks indicate the level of significance (* p < 0.01) for cultures without thiamine (VB), using Student's t-test.
Knockdown of TaATG8j Enhances Wheat Susceptibility to Pst
To study the role of the TaATG8j gene in wheat immunity against Pst in more detail, barley stripe mosaic virus (BSMV)-mediated virus-induced gene silencing (VIGS) was used to silence TaATG8j. Due to the high similarity between the three copies, the two fragments used for silencing resulted in the silencing of all three copies together (Figure S1). Wheat seedlings inoculated with BSMV:TaATG8j-1s, BSMV:TaATG8j-2s and BSMV:γ exhibited mild chlorotic mosaic symptoms on the third or fourth leaves at 13 dpi, while seedlings inoculated with BSMV:TaPDS displayed strong photobleaching symptoms (Figure 7A). These results suggest that the BSMV-VIGS system functioned correctly. Fifteen days post-inoculation with the Pst pathotype CYR23, apparent HR was observed on the fourth leaves of the BSMV:TaATG8j-knockdown wheat seedlings. Sporadic fungal sporulation was observed around the necrotic spots at 18 dpi (Figure 7D). In contrast, inoculation with the Pst pathotype CYR31 resulted in normal disease development with no remarkable change (Figure 7C). qRT-PCR analyses showed that TaATG8j expression targeted by both fragments was significantly reduced in knockdown plants compared with control plants (Figure 7). Relative to BSMV:γ, the highest silencing efficiency was observed with the BSMV:TaATG8j-2s fragment at all time points (more than 80%), and silencing with the BSMV:TaATG8j-1s fragment was also significant. These results indicated that the TaATG8j gene was effectively silenced in the incompatible interaction, and that after silencing of TaATG8j the resistant wheat cultivar Su11 became susceptible.
Suppression of Defense-Related Genes in TaATG8j-Knockdown Plants
At the time of plant pathogen infection, the production of PR proteins in the uninfected parts of the plant can protect the plant from further infection [31,32]. Given the reduced resistance of wheat Su11 to the avirulent Pst CYR23, we further assessed the expression pattern of defense-related genes in TaATG8j-knockdown plants by qRT-PCR. The results showed that the PR protein genes TaPR1 and TaPR2 were down-regulated in TaATG8j-1s- and TaATG8j-2s-knockdown plants infected by Pst CYR23, particularly at 48, 72 and 120 hpi (Figure 8 and Figure S4). The expression of the defense-related gene TaSOD was also reduced in the TaATG8j-knockdown plants. These results indicate that the expression of defense-related genes was suppressed in TaATG8j-knockdown plants.
Histological Observation of Pst Growth and Host Necrotic Cell Death
To illustrate fungal development in TaATG8j-knockdown plants, WGA staining was used to stain the fungal structures. As shown in Figure 9A,B, the TaATG8j-knockdown plants inoculated with CYR23 had significantly longer hyphae than the BSMV:γ plants at 24, 48 and 72 hpi. In addition, the number of branches was significantly increased (p < 0.05) by the second knockdown fragment at 24 and 72 hpi, and more hyphal branches were observed than in the BSMV:γ-infected seedlings (Figure 9C). In contrast, the necrotic areas were significantly decreased in leaves infected with the first and second silencing fragments compared with BSMV:γ (Figure 9D). The overall histological results indicated that silencing TaATG8j enhanced the susceptibility of the Su11 cultivar to Pst and permitted enhanced hyphal growth and branching.
Increased Fungal Biomass in TaATG8j-Knockdown Plants
To assess fungal mycelium growth in TaATG8j-silenced plants, total genomic DNA was extracted to quantify the fungal biomass using qRT-PCR. At 18 dpi, the fungal content in the tissues of TaATG8j-knockdown seedlings infected by Pst CYR23 was significantly (p < 0.05) increased compared with that in the control plants. Relatively more abundant fungal biomass was identified in TaATG8j-2s-knockdown plants than in TaATG8j-1s-knockdown plants, which may have been due to the higher silencing efficiency of the second fragment (Figure 10). Figure 10. Quantification cycle (Cq) values are plotted against the initial copy number of template DNA (10⁴, 10⁵, 10⁶, 10⁷, 10⁸, and 10⁹). Genomic DNA of Su11 infected with urediniospores of the Pst pathotype CYR23 was used to generate the standard curves. Fungal biomass was measured by qRT-PCR on the total genomic DNA extracted from wheat leaves infected with CYR23 at 18 dpi. The ratio of total fungal to total wheat DNA was calculated using the TaEF-1α and Pst-EF genes for normalization. The letters a, b and c indicate significant differences (LSD test) among the biomass in BSMV:γ, BSMV:TaATG8j-1s and BSMV:TaATG8j-2s plants.
Discussion
Autophagy is conserved among yeasts, animals and plants, and plants retain a majority of the autophagy machinery of yeast. In fact, several core protein families involved in autophagy have expanded in plants. As one of the essential autophagy-associated proteins, the expanded ATG8 family plays multifunctional roles in plants. In wheat, as a hexaploid (AABBDD) crop, nine ATG8-heterologous proteins have been reported [29]. In the present study, we identified and functionally characterized one of the ATG8 genes (TaATG8j) in wheat. Three copies of TaATG8j were located in the wheat genome. They shared high similarity with the other ATG8 members, which may suggest potential redundancy, or on the other hand, that many similar but different proteins or enzymes are required for the overall autophagy process.
During the wheat-Pst incompatible interaction, TaATG8j is specifically induced in wheat tissue responding to avirulent Pst, suggesting its potential involvement in the basal defense activated by the avirulent rust fungus. Heterologous expression in fission yeast revealed a pro-cell-death function of TaATG8j. The silencing of TaATG8j resulted in reduced necrotic cell death caused by an avirulent Pst pathotype. During the wheat-Pst incompatible interaction, HR surrounding the infection sites, with rapid and robust cell death, is the main form of wheat resistance. The suppressed necrotic cell death indicated an inhibited HR in TaATG8j-knockdown plants, which led to the enhanced growth and development of Pst. It appears that TaATG8j functions positively to promote cell death during HR to defend against biotrophic pathogen attack. However, the overexpression of TaATG8j in N. benthamiana delayed cell death caused by BAX, exhibiting a pro-survival role. These results indicate that TaATG8j may play different roles under different conditions. In fact, there is still controversy about autophagic activities in cell death. Autophagy has been reported to be involved in both cell survival and cell death, and can therefore either restrict or promote PCD at the site of pathogen infection. In plants' defensive response against biotrophic or necrotrophic fungi, PR proteins control pathogen invasion [9]. Under stress conditions, autophagy triggers a cell survival (anti-death) mechanism to cope with the adverse situation [33,34]. In contrast, in certain physiological or developmental conditions, autophagy is considered a nonapoptotic (autophagic or type II) form of cell death (pro-death) [35,36]. When nutrition is limited, autophagy promotes or restricts PCD in specific pathological and developmental situations in eukaryotic cells, maintaining the adaptive and homeostatic balance [37]. The plant immune system has a dual role: it detects pathogen-associated molecular patterns (PAMPs) through PAMP-triggered immunity (PTI), and it has evolved resistance proteins that recognize pathogen effector proteins and induce effector-triggered immunity (ETI). In most cases, PTI and ETI are associated with PCD at the site of microbial invasion, which is considered HR [38].
Effector-triggered immunity is the main resistance mode of wheat to the biotrophic stripe rust fungus. As typical characteristics of ETI, we measured HR, the number of hyphal branches, hyphal length, and necrotic areas to determine the capability of Pst to form colonies in inoculated wheat tissue. Transcript levels of TaPR1, TaPR2 and TaSOD were significantly decreased in TaATG8j-knockdown plants (Figure 8), supporting a positive role of TaATG8j in the activation of the plant resistance response. For further confirmation of the function of TaATG8j, we assayed the fungal biomass in both BSMV:TaATG8j-1s- and BSMV:TaATG8j-2s-knockdown plants compared with the control (BSMV:γ) following infection with the avirulent Pst pathotype CYR23. The Pst biomass in the knockdown plants was significantly increased (Figure 10), suggesting that TaATG8j proteins contribute to resistance to Pst by limiting Pst growth.
Determination of the subcellular localization may provide information about the functions of TaATG8j. In A. thaliana, ATG8 proteins enter the autophagosomes and vacuoles and are then degraded, or become attached to the outer membrane of the vacuoles and are subsequently removed from it [18]. ATG8 proteins are scattered throughout the preautophagosomal structure under basal and low-nitrogen conditions [39]. ATG8 proteins participate in both autophagy and cytoplasm-to-vacuole transport [20]. In wheat, four types of ATG8 proteins are distributed in punctate, possibly autophagosomal, structures, suggesting that they may be recruited to autophagic membranes and then participate in autophagy [29]. In the present study, TaATG8j proteins were also found to be distributed throughout the cytoplasm. The localization of TaATG8j proteins in the cytoplasm may suggest a function in autophagic membrane biogenesis. Whether the decreased defense in TaATG8j-knockdown plants is due to altered autophagy activity needs to be further determined.
Cloning and Sequence Analyses of TaATG8j
Based on the expressed sequence tag (EST) sequence (Accession No. GR302394.1) from the cDNA library of a compatible wheat-Pst interaction [40], the primers TaATG8j-PB-F and TaATG8j-PB-R (Table S1) were used to clone the open reading frame (ORF) of TaATG8j. The amino acid sequence and conserved domains of TaATG8j were analyzed using an online protein translation tool (Available online: http://insilico.ehu.es/translate/) and the NCBI conserved domain search (Available online: https://www.ncbi.nlm.nih.gov/Structure/cdd/wrpsb.cgi/), respectively. Multi-sequence alignment was conducted using DNAMAN 6.0 software (Lynnon Biosoft, San Ramon, CA, USA). Phylogenetic analysis of the TaATG8j protein and other ATG8 proteins was carried out using the neighbor-joining method with 1000 bootstrap replicates in the MEGA7 software (Available online: https://www.megasoftware.net/) (Figures 1 and 2).
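For readers who want to reproduce a comparable neighbor-joining tree outside MEGA7, a minimal Biopython sketch is shown below; the alignment file name is an assumption, and the bootstrap support computed in MEGA7 for this study is not reproduced here.

```python
from Bio import AlignIO, Phylo
from Bio.Phylo.TreeConstruction import DistanceCalculator, DistanceTreeConstructor

# Assumed input: a protein alignment of ATG8 sequences in FASTA format
alignment = AlignIO.read("atg8_alignment.fasta", "fasta")

calculator = DistanceCalculator("identity")      # pairwise identity distances
distance_matrix = calculator.get_distance(alignment)

constructor = DistanceTreeConstructor()
nj_tree = constructor.nj(distance_matrix)        # neighbor-joining tree

Phylo.draw_ascii(nj_tree)                        # quick text rendering
Phylo.write(nj_tree, "atg8_nj_tree.nwk", "newick")
```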
Plant and Fungal Materials
The wheat variety Suwon 11 (Su11), N. benthamiana and the Pst pathotypes CYR31 (virulent) and CYR23 (avirulent) used in this study were obtained from the State Key Laboratory, Northwest A&F University, Yangling, China. Su11, carrying the stripe rust resistance gene YrSu, is resistant to CYR23 but susceptible to CYR31 [41]. Wheat plants were grown and inoculated with the Pst pathotypes following the methods described by Kang and Li [42]. The first leaves of wheat at the two-leaf stage were artificially inoculated separately with fresh urediniospores of the Pst pathotypes CYR23 and CYR31, whereas the mock (control) plants were treated with sterile water. After inoculation, the wheat seedlings were incubated in a dark chamber for 24-36 h at 100% relative humidity and 15 °C, and then grown under a 16-h photoperiod with fluorescent white light. For RNA extraction, inoculated wheat leaves were sampled at 0, 6, 12, 24, 48, and 120 h post-inoculation (hpi). At each time point, sampled leaves were immediately submerged in liquid nitrogen and stored at −80 °C prior to RNA extraction. For each time point, three biological replications were performed.
Extraction of RNA, cDNA Synthesis and qRT-PCR Analysis
RNA was extracted from the collected wheat leaves using the TRIzol reagent method (Invitrogen, Carlsbad, CA, USA), according to the manufacturer's guidelines. RNA quality was checked by gel electrophoresis, and RNA concentration was determined using a NanoDrop™ 1000 spectrophotometer (Thermo Fisher Scientific, Waltham, MA, USA). cDNA was produced from 2 µg of total RNA with oligo(dT)18 primers using the RevertAid First Strand cDNA Synthesis kit (Thermo Fisher Scientific, Waltham, MA, USA; Available online: www.thermofisher.com/order/catalog/product/K1621) following the manufacturer's instructions. On the basis of a subgenomic alignment, genome-specific primers were designed for the 2A, 2B and 2D copies, and qRT-PCR was then performed (Figure 3). The expression of TaATG8j-2AS, TaATG8j-2BS and TaATG8j-2DS in the compatible and incompatible interactions between wheat and Pst was quantified using qRT-PCR performed on a CFX Connect™ Real-Time PCR Detection System (Singapore). Relative expression was normalized to the wheat elongation factor gene TaEF-1α (GenBank accession no. Q03033) and quantified using the comparative 2^−ΔΔCt method [43]. All reactions were performed in triplicate and with three biological replications. The primers used for qRT-PCR are listed in Table S1.
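The comparative 2^−ΔΔCt calculation referenced above can be written out explicitly as in the short sketch below; the Ct values used are hypothetical.

```python
import numpy as np

def relative_expression(ct_target, ct_reference, ct_target_cal, ct_reference_cal):
    """Comparative 2^-ddCt method:
    dCt = Ct(target) - Ct(reference); ddCt = dCt(sample) - dCt(calibrator)."""
    d_ct_sample = ct_target - ct_reference
    d_ct_calibrator = ct_target_cal - ct_reference_cal
    dd_ct = d_ct_sample - d_ct_calibrator
    return 2.0 ** (-dd_ct)

# Hypothetical Ct values: TaATG8j vs TaEF-1a at 24 hpi relative to 0 hpi
fold_change = relative_expression(ct_target=24.1, ct_reference=20.3,
                                  ct_target_cal=26.0, ct_reference_cal=20.5)
print(round(fold_change, 2))   # ~3.25-fold in this example
```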
Subcellular Localization and Immunoblotting of GFP-TaATG8j
To investigate the subcellular localization of TaATG8j, the ORF of TaATG8j was subcloned into the PBinGFP2 vector using the specific primers TaATG8j-PB-F and TaATG8j-PB-R. The common primers PBinGFP2-F and PBinGFP2-R were used to confirm ligation of the ORF into the PBinGFP2-GFP vector, which was then introduced into the Agrobacterium tumefaciens strain GV3101 by electroporation. When the OD600 of the culture medium reached 0.7-0.8 (4-5 weeks), the culture was infiltrated into N. benthamiana leaves. Infiltrated plants were incubated in a growth chamber under a 16-h/8-h photoperiod at 22 °C. Two days post-infiltration, leaf tissues were sampled for microscopic study using DAPI (5 µg·mL−1) to determine whether GFP-TaATG8j localized to nuclei. Additionally, a hypertonic solution (0.8 M D-mannitol, MW: 182.17) was used to identify the specific position of GFP-TaATG8j. The GFP signal of the empty vector and TaATG8j constructs was observed under a fluorescence microscope (Olympus BX-53F, Olympus Corporation, Tokyo, Japan).
For Western blotting, leaves of N. benthamiana carrying eGFP-TaATG8j-PB-3101 were sampled and ground in liquid nitrogen. Total protein was extracted using a protein extraction kit (Solarbio, Beijing Solarbio Science and Technology Co. Ltd., Beijing, China) following the manufacturer's guidelines. Proteins were separated by SDS-PAGE and then transferred onto cellulose blotting membranes (pore size 0.45 µm, Bio-Rad, Hercules, CA, USA), which were incubated in blocking buffer (5% BD-Difco skim milk in 1× TBS with 0.05% Tween 20) for 2 h. The membranes were then incubated with a mouse primary antibody (anti-eGFP antibody, Sigma-Aldrich, Shanghai, China) at a 1:1000 dilution to detect the eGFP fusion protein. A secondary antibody (anti-mouse antibody at 1:5000 dilution, Sigma-Aldrich Co.) and a chemiluminescence substrate were used to visualize the proteins (Sigma, Tokyo, Japan).
Agrobacterium-Mediated Transient Expression of TaATG8j in N. benthamiana
The coding region of TaATG8j was amplified with the specific primers TaATG8j-ClaI-F and TaATG8j-SalI-R (Table S1) and cloned via ClaI and SalI into the potato virus X (PVX) vector PGR106, resulting in the recombinant PVX-TaATG8j. The reconstructed vectors PVX-Empty Vector (EV), PVX-Avr1b, PVX-BAX and the recombinant PVX-TaATG8j were transformed separately into A. tumefaciens (GV3101) via electroporation. The transformed A. tumefaciens strains carrying PVX-EV, PVX-TaATG8j, PVX-BAX or PVX-Avr1b were cultured in LB medium with kanamycin (30 µg·mL−1) and rifampicin (30 µg·mL−1) at 28 °C for 24-48 h. During the log phase, cells were collected by centrifugation at room temperature, washed 2-3 times with 10 mM MgCl2 and suspended to an OD600 of 0.2-0.3 in infiltration medium (10 mM MgCl2). The suspensions were kept in darkness at room temperature for 3-4 h prior to infiltration. A. tumefaciens carrying PVX-EV, PVX-Avr1b, or PVX-TaATG8j were infiltrated into both sides of N. benthamiana leaves with a needleless syringe. Twenty-four hours later, an A. tumefaciens strain carrying PVX-BAX was injected into the same position in one part of the leaf, and the leaves were photographed 5 and 6 days after infiltration. Three biological replications were performed, and for each replicate, four N. benthamiana leaves were tested.
Overexpression of TaATG8j in Yeast
The coding sequence of TaATG8j was amplified using the primers TaATG8j-SalI-F and TaATG8j-SmaI-R and cloned into the SalI- and SmaI-digested pREP3X vector. The reconstructed pREP3X_BAX, pREP3X and recombinant pREP3X_TaATG8j were transformed into fission yeast (S. pombe) by electroporation; 5 µg·mL−1 thiamine (VB) was used to repress the nmt promoter in the pREP3X vector. Positively transformed yeast cells were incubated for 34 h in fresh liquid SD (-Leu) medium with or without thiamine, starting from an optical density (OD600) of 0.2. The fission yeast cells were sampled at 14, 18, 22, 26, 30 and 34 h post-incubation. Dead cells were stained with trypan blue at a concentration of 10 µM [44] and then counted using a hemocytometer under an OLYMPUS BX-53F microscope. Dead yeast cells appeared blue, and the percentage of dead yeast cells out of total yeast cells was determined in at least 10 fields of view. The total number of live yeast cells was also counted with and without VB.
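The dead-cell percentages and the Student's t-test comparison between the +VB and −VB cultures could be computed along the following lines; the counts used here are invented for illustration.

```python
import numpy as np
from scipy import stats

def percent_dead(dead_counts, total_counts):
    """Percentage of trypan blue-stained (dead) cells per field of view."""
    dead = np.asarray(dead_counts, dtype=float)
    total = np.asarray(total_counts, dtype=float)
    return 100.0 * dead / total

# Hypothetical hemocytometer counts from >=10 fields of view at one time point
minus_vb = percent_dead([12, 15, 11, 14, 13, 16, 12, 15, 14, 13],
                        [80, 90, 85, 88, 82, 95, 84, 89, 87, 86])
plus_vb = percent_dead([4, 5, 3, 6, 4, 5, 4, 3, 5, 4],
                       [82, 88, 86, 90, 84, 92, 85, 83, 89, 87])

t_stat, p_value = stats.ttest_ind(minus_vb, plus_vb)
print(f"mean -VB: {minus_vb.mean():.1f}%, mean +VB: {plus_vb.mean():.1f}%, p = {p_value:.3g}")
```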
BSMV-Mediated Silencing of TaATG8j in Wheat-Pst Interactions
The cDNA sequence of TaATG8j was aligned with the T. aestivum cv. Chinese Spring (CS) genome using the service provided by the International Wheat Genome Sequencing Consortium (Available online: http://wheat-urgi.versailles.inra.fr/Seq-Repository/BLAST/). Two fragments (105 bp from the coding region and 101 bp spanning the coding and 3′ untranslated regions), exhibiting the highest polymorphism within the gene family and the lowest sequence similarity to other genes, were chosen to construct γ-RNA-based derivative plasmids. The two chosen silencing fragments were amplified with the specific primers TaATG8j-PacI-F1, TaATG8j-NotI-R1, TaATG8j-PacI-F2 and TaATG8j-NotI-R2 (Supplementary Table S1) and sub-cloned into the γ-RNA of barley stripe mosaic virus (BSMV) via the NotI and PacI restriction sites to construct the recombinant BSMV:TaATG8j-1s and BSMV:TaATG8j-2s plasmid vectors. The silencing construct plasmids were linearized, and BSMV RNA was then prepared using an in vitro RNA transcription kit (mMESSAGE mMACHINE; Ambion). For viral inoculation, the transcripts (BSMV RNA α, β, γ, γ-TaPDS, TaATG8j-1s and TaATG8j-2s) were diluted four times. The transcripts were mixed with FES buffer [45] (per leaf: 0.5 µL α, 0.5 µL β, 0.5 µL of γ, γ-TaPDS, TaATG8j-1s or TaATG8j-2s, and 9.0 µL FES buffer), and the mixtures were inoculated on the apical side of the second leaves at the two-leaf stage by mildly rubbing the leaf surface with a gloved finger [46]. The leaves were incubated in the dark at high humidity at 22-24 °C for 24 h. Subsequently, the virus-inoculated seedlings were moved to a 23 °C growth chamber, and virus symptoms were observed at regular intervals. BSMV:TaPDS was used as a positive control for BSMV infection. At 13 days post inoculation (dpi), when apparent viral symptoms appeared, the fourth leaf of each seedling was inoculated with fresh urediniospores of CYR23 for the incompatible interaction or CYR31 for the compatible interaction, and the seedlings were incubated at 16 °C with high relative humidity. The Pst-inoculated wheat leaves were collected at 0, 24, 48, 72 and 120 hpi for RNA extraction and histological observation. Disease symptoms first appeared at 15 dpi, and the disease phenotype was photographed at 18 dpi. Leaves (inoculated with CYR23 only) were collected for the biomass assay. For each assay, three replications were performed, with 150 seedlings per replication.
Histological Study of Fungal Growth in TaATG8j-Knockdown Plants
For histological observation, wheat leaves were decolorized in acetic acid/absolute ethanol (1:1 v/v) and then fixed and cleared in trichloroacetaldehyde hydrate until the leaves were translucent. The cleared leaf segments were examined with a fluorescence microscope (OLYMPUS BX-53F). The autofluorescence of mesophyll cells at the infection sites was observed and used to indicate necrotic areas. Fungal structures were stained with wheat germ agglutinin (WGA) conjugated to Alexa 488 (Invitrogen, Carlsbad, CA, USA) [47]. Only infections in which an appressorium formed over a stoma were considered actual penetrations with the formation of infection hyphae; the necrotic areas, haustorial mother cells, hyphal branches and hyphal length were examined. At least 50 infection sites were measured for each treatment. The hyphal length, number of hyphal branches and necrotic areas were calculated using DP2-TWAIN/DP2-BSW software (Olympus Corp, Tokyo, Japan). Error bars indicate the variation among the treatments. Statistical analysis was carried out using the Tukey test (p < 0.05).
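The Tukey comparison of, for example, hyphal length across the three silencing treatments could be carried out as in this sketch; the measurements and group sizes are hypothetical.

```python
import pandas as pd
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical hyphal length measurements (um) per infection site
data = pd.DataFrame({
    "treatment": ["BSMV:g"] * 5 + ["TaATG8j-1s"] * 5 + ["TaATG8j-2s"] * 5,
    "hyphal_length": [52, 48, 55, 50, 47, 68, 72, 65, 70, 66, 75, 80, 78, 74, 82],
})

result = pairwise_tukeyhsd(endog=data["hyphal_length"],
                           groups=data["treatment"], alpha=0.05)
print(result)   # pairwise mean differences with adjusted p-values
```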
Fungal Biomass Assay in TaATG8j-Knockdown Plants
Plant leaf samples were collected at 18 days post-inoculation with Pst CYR23, and genomic DNA was extracted using the Plant Genomic Extraction Kit (TIANGEN, Beijing, China). Quantification of the Pst biomass was performed by qRT-PCR. Genomic DNA from Su11 leaves and from urediniospores of Pst CYR23 was serially diluted and used to prepare standard curves. Wheat TaEF-1α and the constitutively expressed Pst elongation factor gene Pst-EF [48] were used to quantify wheat and Pst DNA in the Pst-infected leaves of BSMV:00- or BSMV:TaATG8j-inoculated plants. The two standard curves were used to perform relative quantification of Pst and wheat genomic DNA.
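A sketch of the standard-curve calculation behind the biomass assay is given below: Cq values are regressed on log10 template copy number for each standard curve, and unknown samples are interpolated before taking the Pst-to-wheat ratio. All numbers are hypothetical.

```python
import numpy as np

def fit_standard_curve(log10_copies, cq_values):
    """Linear fit Cq = slope * log10(copies) + intercept."""
    slope, intercept = np.polyfit(log10_copies, cq_values, 1)
    return slope, intercept

def copies_from_cq(cq, slope, intercept):
    """Invert the standard curve to estimate template quantity."""
    return 10 ** ((cq - intercept) / slope)

# Hypothetical dilution series (10^4 to 10^9 copies) for Pst-EF and TaEF-1a
dilutions = np.arange(4, 10)
pst_slope, pst_int = fit_standard_curve(dilutions, [33.1, 29.8, 26.4, 23.0, 19.7, 16.3])
ta_slope, ta_int = fit_standard_curve(dilutions, [32.5, 29.1, 25.8, 22.4, 19.0, 15.7])

# Cq values measured in an infected leaf sample (hypothetical)
pst_amount = copies_from_cq(24.5, pst_slope, pst_int)
wheat_amount = copies_from_cq(21.0, ta_slope, ta_int)
print(f"Pst/wheat DNA ratio: {pst_amount / wheat_amount:.3f}")
```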
Statistical Analyses
Mean values and standard errors were measured using Microsoft Excel, and statistical significance levels were assessed using the SPSS software (SPSS Inc. Chicago, IL, USA).
Conclusions
In conclusion, our study revealed the different roles of TaATG8j in cell death regulation in response to different stimuli. More importantly, our findings suggest that TaATG8j functions as a positive regulator of cell death in HR to promote defense against Pst. Further research is needed to explore the detailed regulatory mechanism of TaATG8j in autophagy and pathogen resistance regulation.
Conflicts of Interest:
The authors declare no conflict of interest.
|
v3-fos-license
|
2014-10-01T00:00:00.000Z
|
2005-06-17T00:00:00.000
|
11092493
|
{
"extfieldsofstudy": [
"Biology",
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "HYBRID",
"oa_url": "http://www.jbc.org/content/280/24/23114.full.pdf",
"pdf_hash": "8f55d176e3b0bda30dab1155b9df18e487b2c25e",
"pdf_src": "Highwire",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46529",
"s2fieldsofstudy": [
"Biology",
"Chemistry"
],
"sha1": "f9d17da947e61129769b88b7ee0ceee51e7cae36",
"year": 2005
}
|
pes2o/s2orc
|
Amidation and Structure Relaxation Abolish the Neurotoxicity of the Prion Peptide PrP106–126 in Vivo and in Vitro*
One of the major pathological hallmarks of transmissible spongiform encephalopathies (TSEs) is the accumulation of a pathogenic (scrapie) isoform (PrPSc) of the cellular prion protein (PrPC) primarily in the central nervous system. The synthetic prion peptide PrP106–126 shares many characteristics with PrPSc in that it shows PrPC-dependent neurotoxicity both in vivo and in vitro. Moreover, PrP106–126 in vitro neurotoxicity has been closely associated with the ability to form fibrils. Here, we studied the in vivo neurotoxicity of molecular variants of PrP106–126 toward retinal neurons using electroretinographic recordings in mice after intraocular injections of the peptides. We found that amidation and structure relaxation of PrP106–126 significantly reduced the neurotoxicity in vivo. This was also found in vitro in primary neuronal cultures from mouse and rat brain. Thioflavin T binding studies showed that amidation and structure relaxation significantly reduced the ability of PrP106–126 to attain fibrillar structures in physiological salt solutions. This study hence supports the assumption that the neurotoxic potential of PrP106–126 is closely related to its ability to attain secondary structure.
Prion diseases or transmissible spongiform encephalopathies (TSEs) represent a group of fatal, neurodegenerative diseases including e.g. Creutzfeldt-Jakob disease in humans, scrapie in sheep and goats, chronic wasting disease in deer, and bovine spongiform encephalopathy in cattle. The etiology of the diseases can be sporadic, hereditary, or infective. According to the "protein-only" hypothesis (1), the basic infectious mechanism is thought to be a conformational change of the normal (cellular) prion protein (PrPC) into the pathogenic (scrapie) PrPSc catalyzed by PrPSc itself. Thus, presence of PrPC is a prerequisite for prion infection (2).
PrP C is a glycosylphosphatidylinositol-anchored glycoprotein constitutively expressed on the surface of primarily neuronal cells. It consists of two structurally different parts, namely a C-terminal, globular part mainly α-helical in nature and an unstructured, N-terminal part (3,4). Misfolding of PrP C into PrP Sc occurs post-translationally and results in increased β-sheet content and a gain of protease resistance. The central region of PrP C linking the unstructured N-terminal part with the globular C-terminal domain is believed to play a pivotal role in these conformational changes (5)(6)(7)(8).
The pathological hallmarks of the TSEs, which are mainly restricted to the central nervous system, include deposition of PrP Sc , vacuolization of gray matter, neuronal death, and neuroinflammation manifested as astrogliosis and activation of microglia cells (9,10). Normally, PrP Sc deposition and neuropathology are spatiotemporally correlated in vivo (11)(12)(13); however, examples of the uncoupling of these events have been reported (14). The molecular mechanism of TSE-associated cell death is poorly understood, although it seems that apoptosis is involved (reviewed by Hetz et al. (15) and Liberski et al. (16)). Up-regulation of pro-apoptotic markers has been found in postmortem brains from Creutzfeldt-Jakob disease patients (17) and has additionally been found to precede the accumulation of PrP Sc in scrapie-infected mice (18). The exact nature of the neurotoxic entity in the TSEs is still debated. Cytotoxicity of purified PrP Sc has been shown in PrP C -expressing neuroblastoma cells in vitro (15); however whether large aggregates or smaller oligomers of PrP Sc are toxic is unknown (19).
A synthetic peptide named PrP106-126 (numbering corresponding to the human prion protein sequence) resides within the central region of PrP near the N-terminal of the protease-resistant part of PrP Sc. PrP106-126 shares many properties with PrP Sc, as it readily forms amyloid fibrils with a high β-sheet content (20), shows partial proteinase K resistance (20,21), and is neurotoxic both in vivo (22,23) and in vitro (23)(24)(25)(26)(27)(28). PrP106-126 contains the palindrome sequence AGAAAAGA, which makes it highly amyloidogenic. In contrast to other synthetic prion protein fragments that induce neuronal death independently of PrP C expression (23,29), the neurotoxicity of PrP106-126 depends on the expression of endogenous PrP C (23), which makes PrP106-126 a relevant model for PrP Sc neurotoxicity. The in vivo neurotoxicity has been proposed to be linked to its aggregate-forming behavior and ability to form secondary structure in an aqueous environment (22). PrP106-126 causes neuronal death via induction of apoptosis (15,22,23), and events such as mitochondrial disruption (30), oxidative stress (31), and activation of caspases (15) have been found to be involved in this process. Also, another prion-derived peptide, PrP118-135, has been found to cause neuronal death via induction of apoptosis (23). The toxicity of PrP118-135 is, however, independent of endogenous PrP C expression. The toxicity of PrP106-126 has been found to be enhanced by activation of microglia cells (27) and astrocytes (25).
The neuroretina represents an easily accessible and fully integrated part of the central nervous system. Normally, there is a very low level of protease activity in the corpus vitreum of young, healthy animals (32), and injections of peptides can be made directly into the posterior chamber of the eye without causing damage to the eye. The neuronal cells of the retina express PrP C (33,34) and are sensitive to PrP106-126 neurotoxicity, whereas the retinal neurons of PrP−/− mice are resistant to this effect (23). Damage to retinal cells can be quantified with electroretinographic recordings.
We recently showed that C-terminal amidation and structure relaxation of PrP106 -126 significantly reduced the peptide's ability to form amyloid structure in water (35). Amidation of the C terminus has also been indicated by others to induce random coil structure in the peptide and to decrease its propensity to form amyloid fibrils (28).
Here, we found that amidation and structure relaxation of the PrP106 -126 significantly reduced the peptide's in vivo and in vitro neurotoxicity and reduced its ability to form amyloid fibrils in a physiological salt solution. This finding supports the view that the fibril-forming capability of PrP106 -126 is pivotal for its PrP C -dependent neurotoxicity and increases the biological relevance of this peptide as a model for PrP Sc -induced pathology.
MATERIALS AND METHODS
Animals-Adult male C57 black wild-type mice aged 9-11 weeks were purchased from Centre d'élevage Janvier (Le Genest St Isle, France) or from Harlan Scandinavia (Allerød, Denmark). PrP−/− mice named Zurich I (36) were bred in the animal facilities at the Institute de Pharmacologie Moléculaire et Cellulaire, CNRS. Wistar Hannover Galas rat pups were purchased from Taconic M&B, Ry, Denmark.
The RG2-amide sequence was designed based on the principles of peptide structure relaxation as shown by Due Larsen and Holm (37). The peptides were dissolved in sterile PBS to a concentration of 1 mM and stored in aliquots at −20°C until use. For intravitreal injections and for use in the primary mouse embryonal cortical neurons, the peptides were aged for 3 days at room temperature with slight agitation. When non-aged peptides were used for injections, they were kept on ice until use. For cellular experiments with rat cerebellar granular neurons, the peptides were aged for 2 days at 37°C at 2 mM in PBS.
Thioflavin T (ThT) Assay for Amyloid Fibril Formation-A thioflavin T assay was performed as described by LeVine (38). Peptides were dissolved to 1 mM in sterile PBS and allowed to incubate at room temperature with slight agitation in the presence of 20 μM thioflavin T (1 mM stock solution in water; Sigma T3516), reading the fluorescence each day at 485 nm using an excitation wavelength of 440 nm (with a SpectraFluor Plus microplate fluorometer from Tecan). Readings were normalized to the same gain setting to allow comparisons from sample to sample. When comparing different plates, values were further corrected for background (fluorescence of ThT in water).
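The background correction and peak-day comparison described above can be summarised as in the short Python sketch below; the fluorescence values, assumed here to be already scaled to a common gain setting, are hypothetical.

import numpy as np

days = np.arange(1, 8)  # one fluorescence reading per day for 7 days

# Hypothetical gain-normalized ThT fluorescence readings (arbitrary units) per peptide.
readings = {
    "PrP106-126": np.array([120, 480, 910, 860, 700, 620, 560], float),
    "PrP106-126-amide": np.array([110, 160, 240, 260, 250, 240, 230], float),
    "RG2-amide": np.array([105, 130, 170, 210, 250, 240, 230], float),
    "scrambled": np.array([100, 105, 110, 108, 112, 109, 110], float),
}
tht_in_water = 95.0  # plate background: fluorescence of ThT in water

for peptide, values in readings.items():
    corrected = values - tht_in_water           # subtract background before comparing plates
    peak_day = days[int(np.argmax(corrected))]  # day on which the ThT signal peaks
    print(f"{peptide}: peak on day {peak_day}, maximum corrected signal {corrected.max():.0f}")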
Intravitreal Injections-Anesthesia was performed with intraperitoneal injections of pentobarbital (50 mg/kg). Topical application of a local anesthetic (Novesine®) was performed in both eyes. Injections of a 1-μl solution were done unilaterally (in the left eye) with a 30-gauge needle introduced into the posterior chamber on the upper pole of the eye directed toward the center of the vitreous body. The injections were performed slowly (over at least 60 s) to allow diffusion of the peptide and to avoid any ocular hypertension and backflow. For electroretinography (ERG) experiments 5-10 animals were used for each treatment group, and for histological analyses 3-5 animals per group were used. For histological analysis of eyes at 15 days post injection (dpi), 3 mice with representative ERG values (close to the group mean) from each treatment group were selected. For histological analyses of retinas at 4 dpi, the mice were not subjected to ERG before they were killed.
ERG Measurements and Statistical Evaluation of Effect-Full field electroretinograms were prepared under dim red light on overnight dark-adapted animals. Anesthesia was performed with a mixture of 2% isoflurane (Forene®, Abbott Laboratories) and oxygen. The pupils of anesthetized animals were dilated with Mydriaticum®. The animals were kept on a heating mat during anesthesia. A ring-shaped recording electrode was placed on the cornea of each eye, and a reference electrode was placed behind each ear. Zero electrodes were placed on the hind legs. Light stimulus (8 ms) was provided by a single flash (10 cd·s/m²) in front of the animal. The ERGs were recorded using Win7000b (Metrovision, Pérenchies, France). The amplitude of the a-wave was measured from the baseline to the bottom of the a-wave; the b-wave amplitude was measured from the bottom of the a-wave to the peak of the b-wave (Fig. 1). The averaged responses represent the mean of two white flashes (8-ms duration) delivered 2 min apart. ERGs were recorded before and then 4, 7, and 14 days after intravitreal injections. Only animals with approximately similar pre-injection values for the left and right eye were selected for injections. To quantify the effect of the injections, the non-injected eye was used as an internal control, and the absolute values for the ERG a- and b-waves of the injected eye were normalized to the non-injected control eye for each animal. Relative/normalized values for the different peptide groups were compared with the relative/normalized values for the group injected with the scrambled control peptide by using an unpaired t test.
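A minimal Python sketch of this normalization and group comparison is given below; the amplitudes are hypothetical and scipy's unpaired t test merely stands in for the statistics routine actually used.

import numpy as np
from scipy import stats

def normalize_to_control_eye(injected, control):
    # Express injected-eye amplitudes as a percentage of the non-injected control eye.
    return 100.0 * np.asarray(injected, float) / np.asarray(control, float)

# Hypothetical b-wave amplitudes (microvolts), one injected/control eye pair per animal.
scrambled_pct = normalize_to_control_eye([410, 395, 430, 405, 420], [415, 400, 425, 410, 430])
peptide_pct = normalize_to_control_eye([330, 310, 355, 340, 325], [400, 395, 420, 415, 405])

# Unpaired t test of a peptide group against the scrambled-control group.
t_stat, p_value = stats.ttest_ind(peptide_pct, scrambled_pct)
print(f"scrambled mean: {scrambled_pct.mean():.1f}%, peptide mean: {peptide_pct.mean():.1f}%, p = {p_value:.3f}")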
Tissue Preparation-Mice used for ERG recordings were killed on day 15 after injection, and other groups were killed on day 4 after injection. All mice were killed by cervical dislocation. The eyes were enucleated and fixed in ice-cold 4% paraformaldehyde for a minimum of 24 h and then cryoprotected overnight in PBS containing 20% sucrose. The eyes were then embedded in TissueTek (Sakura) and frozen at −80°C. Cryosections (10 μm) throughout the whole eye were cut on a cryostat. The sections were then dried for 1-2 h at 55°C before they were stored at −80°C until further use.
TUNEL Test on Cryosections-A TUNEL test (Roche Applied Science in situ cell death detection kit, POD) was performed on frozen sections from animals 4 or 15 dpi according to the manufacturer's instructions with slight modifications. The sections were defrosted for 30 min at room temperature and then washed 2 × 5 min in PBS. Blocking was performed in methanol with 3% H2O2 for 30 min, after which the slides were washed 2 × 5 min in PBS. Permeabilization was performed in 4°C double distilled water with 0.5% sodium citrate and 0.5% Triton X-100 followed by a short wash in PBS. A positive control was digested with DNase for 20 min at 37°C. A TUNEL mixture was then added to all slides (except a negative control, which received only the label solution from the TUNEL kit). Incubation with the TUNEL mixture was performed in a humid atmosphere for 3 h at 37°C. Washing was performed 3 × 5 min in PBS at 37°C, and a converter solution from the TUNEL kit was added to all slides. Incubation was performed for 30 min, and the slides were then washed 3 × 5 min in PBS. DAB solution (one thawed DAB tablet, Sigma catalog number D5905, was dissolved in 15 ml of PBS; 12 μl of 30% H2O2 was added immediately before use) was added, and development was performed for 6 min. The reaction was stopped in distilled water. The slides were dehydrated through a graded series of ethanols and mounted in xylol (Pertex). The slides were analyzed in a light microscope and photographed with a Leica DC300 digital camera.
Primary Mouse Embryonal Cortical Neurons (ECNs) and Toxicity Assay-Primary cortical neuronal cultures were prepared from wild type C57 black embryos. Briefly, the embryos were dissected at embryonal day 13 or 14, and the brains were extracted and rinsed from meninges in 37°C warm Dulbecco's PBS with 1% glucose. The PBS solution was replaced by Neurobasal medium (Invitrogen) with the addition of L-glutamine, 10% inactivated fetal calf serum, B27 supplement, and penicillin/streptomycin. Gentle homogenization was performed with a Pasteur pipette, after which the cells were washed once. The cell concentration was adjusted to the appropriate concentration, and plating of 5 × 10⁴ cells/well was performed on Nunc 96-well plates precoated overnight with poly-D-lysine. Incubation was performed at 37°C with 5% CO2 in a humidified incubator. After 30 min of incubation, the medium was changed. After 24 h of incubation, the medium was replaced by serum-free Neurobasal medium with a B27 supplement, L-glutamine, and penicillin/streptomycin. Peptides were added after 3-5 days of maturation and incubated for 12, 24, and 48 h. Viability was measured with an MTS assay (CellTiter96 AQueous One solution cell proliferation assay; Promega). Absorbance at 492 nm was measured on a spectrophotometer.
Primary Rat Cerebellar Granular Neurons (CGNs) and Toxicity Assay-CGNs were isolated from 7-8 day-old Wistar Hannover Galas rat pups (Taconic M&B) as described by Patel & Kingsbury (39) with some modifications. Briefly, cerebellar granules were isolated from pups after decapitation. The tissue was suspended in Krebs buffer and digested with trypsin before single cell suspensions were made in Neurobasal A medium supplemented with 10% fetal calf serum, 35 mM KCl, GlutaMAX-I supplement, penicillin/streptomycin, and 1 mM sodium pyruvate (Invitrogen). The cell concentration was adjusted to the appropriate concentration, and plating of 8 × 10⁴ cells/well was performed on 96-well plates precoated with poly-D-lysine (BD Biosciences) and incubated for 2 h at 37°C and 5% CO2 in a humidified incubator. The medium was changed to serum-free Neurobasal A with B27 (without antioxidants) and the same supplements as above. On day 6 the medium was replaced by fresh Neurobasal A (without phenol red) with the addition of B27 (without antioxidants), the same supplements as above, and peptides to a final concentration of 100 μM. Viability was measured on day 15 using the WST-1 assay as described in the manufacturer's manual (cell proliferation reagent WST-1; Roche Applied Science). The cells were analyzed in a light microscope and photographed with a Leica DC300 digital camera after 9 days of treatment.
Caspase Induction-CGNs were isolated as described above and incubated for 6 days before peptides were added in serum-free Neurobasal A medium (without antioxidants, supplemented with 15 mM KCl and B27) in 96-well black microtiter plates (Optilux, BD Biosciences) precoated with poly-D-lysine. After incubation for 3 days, the induction of caspases was measured in a cell lysate described in the manufacturer's manual (homogenous caspases assay, Roche Applied Science). This assay measures mainly caspase-2, -3, and -7.
Immunocytochemistry on CGN-CGN cultures were grown at 37°C with 5% CO2 on eight-chamber Lab-Tek Permanox chamber slides (Nalge Nunc) precoated with poly-D-lysine. The cells were fixed in 4% paraformaldehyde in PBS, pH 7.4 (freshly prepared overnight with stirring), for 1 h at room temperature and washed with washing buffer (1% Triton X-100 in PBS, pH 7.4) 3 × 5 min with gentle agitation before incubating overnight at 4°C with 1 μg/ml of the mouse monoclonal anti-PrP antibody SAF32 (Spi-Bio, Montigny le Bretonneaux, France) or mouse IgG2b (catalog number X0944, DAKO) diluted in PBS, pH 7.4, with 1.5% horse serum. After rinsing, the slides were incubated with biotinylated anti-mouse immunoglobulin antibody diluted 1:200 in PBS, pH 7.4, with 1.5% horse serum for 1 h at room temperature, rinsed, and incubated with ABC reagent for 30 min (antibodies and ABC reagent from the Vectastain ABC kit, Vector Laboratories). After rinsing, slides were developed with a DAB solution (one thawed DAB tablet from Sigma was diluted in 15 ml of PBS; 12 μl of H2O2 was added immediately before use) before rinsing in distilled water. Slides were mounted with Aquamount mounting media (BDH Laboratories, Poole, UK) under glass coverslips.
RESULTS
Amidation and Structure Relaxation of PrP106-126 Reduce Its Tendency to Form Amyloid Fibrils in PBS-We analyzed the fibrillation tendency (amyloidogenicity) of the peptides at 1 mM in PBS with the ThT assay, where fluorescence was followed for 7 days (Fig. 2). Fibrillation of PrP106-126 in PBS peaked after 3 days of incubation, after which it started to decline. The PrP106-126-amide and the RG2-amide showed much lower fibrillation tendency compared with PrP106-126, although they both rose to levels significantly higher than the scrambled control peptide. The fibrillation of the RG2-amide proceeded slower than that of both PrP106-126 and PrP106-126-amide, peaking after 5 days of incubation as opposed to 3 days of incubation.
Fibrillar PrP106-126 Induces Long Term Changes in the Retina-To test the peptide variants for their in vivo neurotoxicity, we performed intravitreous injections of the 3-day-aged peptides and performed ERG recordings at 4, 7, and 14 dpi. All animals were injected in their left eye only (the right was left as a non-injected or PBS-injected control). No difference was observed between non-injected and PBS-injected eyes (data not shown). Both eyes were measured by ERG, and for each animal the absolute values of the ERG a- and b-waves, respectively, for the injected eye were normalized to the control eye values (the a-wave and b-wave amplitude values from the control eye were set to 100%). This was done to avoid time-to-time variation in the ERG due to its sensitivity to variable factors like anesthesia, room temperature, circadian time, electrical noise, etc. All ERG-wave values are therefore represented as a percentage of control values. Because none of the tested peptides induced significant changes in the latency of either the a-wave or the b-wave at any time point, this study focuses on the amplitude of these waves only. All of the ERG results are presented in Table I. Injection of a 3-day-aged solution (1 mM) of the scrambled control peptide did not induce a-wave or b-wave amplitude values significantly different from those of the non-injected control eyes at any of the time points tested. The ERG values obtained with the other peptides were therefore tested statistically against the values obtained with the scrambled control peptide.
Injection of a 3-day-aged solution of the PrP106-126 peptide resulted in relative b-wave values that were significantly reduced compared with the corresponding values from the scrambled control group at 4 dpi (17.96%, p = 0.006), 7 dpi (12.6%, p = 0.02), and 14 dpi (20.22%, p = 0.0007). For the a-wave, the values for the PrP106-126 (aged) group were significantly lower than the scrambled control group only at 7 dpi (17.88%, p = 0.027), although there was a tendency for the a-wave to be reduced at 4 dpi (11.77%, p = 0.14) and 14 dpi (13.11%, p = 0.16). Injection of a fresh solution of PrP106-126 yielded a significant reduction of the b-wave (15.87%, p = 0.046) and a non-significant tendency of the a-wave to be reduced (15.79%, p = 0.072) at 4 dpi only. At 7 dpi there was no reduction of either the a-wave or the b-wave with the freshly dissolved peptide (data not shown). Injection of the aged PrP106-126 had no significant effect on ERG in PrP−/− mice (data not shown).
Amidation Diminishes the in Vivo Neurotoxicity of PrP106-126-To correlate fibrillation propensity with in vivo neurotoxicity, we injected an amidated variant of the PrP106-126 (PrP106-126-amide). Injection of a 3-day-aged solution of this peptide had no significant effect on either the b-wave or the a-wave at any time point (Table I). A fresh solution of the PrP106-126-amide had no significant effect on either the a-wave or the b-wave at any time point (data not shown).
Structure Relaxation and Amidation Reduce the in Vivo Neurotoxicity of PrP106-126-Injection of a 3-day-aged solution of the structure-relaxed and amidated variant of PrP106-126 (RG2-amide) resulted in no significant reduction of the b-wave at 4 or 7 dpi, but at 14 dpi there was a significant reduction compared with the scrambled group (15.23%, p = 0.012), as shown in Table I. There was no significant reduction of the a-wave at any time point with this peptide. A fresh solution of the RG2-amide had no significant effect on either the a-wave or the b-wave at any time point (data not shown).
Spatiotemporal Correlation of Apoptotic Cell Death and ERG Effect-To correlate the effect observed on the electroretinograms to apoptotic cell death, we performed histological analyses on retinal sections from injected eyes at 4 and 15 dpi. Four days after injection of the aged PrP106-126, TUNEL-positive nuclei were detected primarily in the retinal ganglion cell layer (Fig. 3B). In the mice injected with the scrambled peptide, only very sparsely distributed TUNEL-positive cells were detected in the outer nuclear layer at 4 dpi (Fig. 3A). At day 15 after injection with the aged PrP106-126, TUNEL-positive nuclei were detected in all nuclear layers of the retina (Fig. 3F), and a disruption of the ganglion cell layer could be identified (Fig. 3F). Injection of the scrambled control peptide yielded no detectable TUNEL-positive nuclei at 15 dpi (Fig. 3E). Interestingly, in mice injected with fresh solutions of the PrP106-126, no TUNEL-positive nuclei were detected in the ganglion cell layer at either 1 or 4 dpi, whereas a significant amount was detected in the outer nuclear layer at both of these time points (Fig. 4, E and F). The same pattern was observed at 1 dpi with the aged PrP106-126 (Fig. 4G).
The PrP106-126-amide did not induce TUNEL-positive nuclei at 4 dpi (Fig. 3C), whereas at 15 dpi some weakly TUNEL-positive nuclei could be detected in the ganglion cell layer (Fig. 3G). Injection of the RG2-amide resulted in some weakly TUNEL-positive cells in the ganglion cell layer and the inner nuclear layer at 4 dpi (Fig. 3D). This effect was even more pronounced at 15 dpi (Fig. 3H).
Cytotoxicity in Primary Neurons Supports in Vivo Results-The short term neurotoxicity of the PrP106-126 variants was tested in primary murine ECNs. For this test the peptides were aged under the same conditions as those used for intravitreal injections (3 days at room temperature). The cells were incubated with the different peptides at concentrations of 40, 80, and 160 μM for 24 and 48 h. PrP106-126 had a time- and concentration-dependent effect on cell viability (Fig. 5A), whereas the PrP106-126-amide showed only a weak tendency to be toxic after 48 h of treatment with 160 μM (Fig. 5B). The RG2-amide and the scrambled control peptide showed no significant toxicity in this system (Fig. 5, C and D, respectively).
The long term neurotoxic effect in vitro was tested in another cellular system (rat CGNs). Immunocytochemistry on these cells revealed that PrP C was expressed on both the soma and the axons of the cells (Fig. 6E). In this setup, peptides were added to the cells 6 days after plating and left to incubate for 9 days. Initial experiments showed that pre-aging the peptides for 2 days in PBS actually reduced their toxicity in this system compared with the addition of freshly dissolved peptides (Table II). Therefore, we decided to use freshly dissolved peptides, expecting that the peptides would aggregate during the 9 days of incubation with the cells. Incubation with 100 μM PrP106-126 resulted in a significant reduction in viability compared with controls (56%). Cells treated with 100 μM of the PrP106-126-amide form or the scrambled control peptide showed no significant reduction in viability after 9 days of treatment (Table II).
Morphological Changes in CGNs Treated with PrP106-126-Nine days of treatment with the PrP106-126 peptide induced clear morphological changes in the CGNs (several shrunken cells, Fig. 6A). Slight morphological changes could be seen after 9 days of incubation with either the PrP106-126-amide or the RG2-amide, as some sparsely distributed shrunken cells could be observed (Fig. 6, B and C), although neither of these peptides resulted in measurable reductions in viability. No morphological changes could be observed in the cells treated with the scrambled control peptide (Fig. 6D).
Caspase Induction in CGN-The ability of the peptides to induce general caspase activity was tested in the CGN cultures. Three days of incubation with PrP106-126 (100 μM) resulted in an increased level of general caspase activity (≈35%) compared with control media levels (Fig. 7). The toxin camptothecin (a DNA topoisomerase I inhibitor) induced a caspase activity level that was ≈50% above that of the media control. Incubation with the RG2-amide resulted in an ≈30% increase, whereas both the PrP106-126-amide and the scrambled control peptide resulted in levels of caspase activity close to the media control levels. Only the PrP106-126- and camptothecin-induced caspase levels were significantly increased compared with that of the scrambled control peptide (unpaired t test).
DISCUSSION
Most previous work with PrP106-126 has utilized the unmodified acid form of the peptide, i.e. the peptide with a free C-terminal carboxylate group, although this is rarely indicated precisely. As shown here and previously (28,35), the amyloidogenicity of the free PrP106-126 peptide differs greatly from that of the PrP106-126-amide, and, as a whole, the amyloidogenicity of PrP106-126 is very sensitive to minor molecular modifications including oxidation, structure relaxation, and stabilization (35). The delicate nature of PrP106-126 could explain some of the contradictory findings reported on the neurotoxicity of this peptide (40-42). We therefore wanted to study the possible effects of a minor modification (C-terminal amidation) on the toxicity of the PrP106-126 peptide. Amidation is clearly biologically relevant considering the location of PrP106-126 internally in the PrP polypeptide.
We found that the ability to form amyloid fibrils was closely correlated with the neurotoxicity of PrP106 -126, as amidation reduced both the amyloidogenicity and neurotoxicity of the peptide. Many other synthetic peptides similarly show fibrillogenic and neurotoxic properties, but PrP106 -126 is unique in that it requires the expression of endogenous PrP C to exert its toxicity. This parallels the action of PrP Sc (15,43) and makes PrP106 -126 an excellent model for TSE pathogenesis. There are several advantages of using PrP106 -126 instead of PrP Sc itself. First, the biological effects of molecular modifications can easily be studied, and, second, the PrP Sc preparations are never completely pure and contain other proteins, lipids, carbohydrates, nucleic acids, and traces of metals and detergents that could interfere with the measurements (44).
Other PrP-derived peptides seem to exert their toxicity through general cytotoxic mechanisms shared with other, non-PrP-derived peptides such as Aβ1-42, α-synuclein, and the islet amyloid polypeptide. Much evidence indicates that these mechanisms involve perturbation of membrane integrity and oxidative stress that do not rely on the presence of mature fibrils but rather on the formation of soluble oligomers with fusogenic properties (45). For instance, the PrP-derived peptides PrP105-132 and PrP118-135 are able to insert into and perturb the integrity of the cell membrane of both PrP-expressing and PrP−/− cells (23,29). The toxicity of PrP105-132 and PrP118-135 is thus independent of PrP C expression and amyloid fibril formation. This may be relevant to the in vivo neuropathogenic process involved in rare cases of TSEs such as Gerstmann-Sträussler-Scheinker disease, but it cannot be taken as a general mechanism.
The correlation between secondary structure and PrP C -dependent toxicity of PrP106 -126 indicates that a well defined conformation of the peptide is a prerequisite for the interaction with a specific cell component leading to toxicity. A probable scenario is that this conformation is necessary for the peptide to be able to form fibrils and that these fibrils or their precursors are responsible for the neurotoxigenic binding to a specific cellular receptor. Several other PrP-derived peptides are able to form fibrils showing PrP C independent neurotoxicity (23,29,56), indicating that PrP C has no indispensable role in neurotoxicity caused by peptide fibrillation. It is therefore probable that the specific PrP C -dependent neurotoxic effect of PrP106 -126 simply implicates PrP C itself as the cellular receptor and relies on a complex being formed between adequately aggregated PrP106 -126 and PrP C . This would, in turn, depend on a precisely defined conformation of PrP106 -126.
The ThT binding studies discussed here showed that PrP106-126 easily formed amyloid fibrils in PBS, peaking after only a few days of aging. It is interesting that the fibrillation process proceeded much faster and reached much higher levels in PBS than those found previously in pure water (35). As expected, both the amidated and the RG2-amide variant of PrP106-126 showed much less ability to form amyloid fibrils than the acid variant; however, they both reached ThT binding levels well above those of the scrambled control peptide. This was not discernible in water, where both the PrP106-126-amide and RG2-amide showed ThT fluorescence at the level of the scrambled control peptide (35).
In accordance with previously reported findings (22,23), we found that the acid form of PrP106-126 was neurotoxic both in vivo and in vitro. The magnitudes of the reductions of the ERG wave amplitudes we report here are smaller than those reported previously. This could be explained by the differences in experimental and statistical methods. Previously, the effect of the peptides was calculated by comparing the absolute ERG values before and after injection in the same eye (22,23). We found that the response of the normal ERG tended to be suppressed by repeated inductions of anesthesia, which could potentially increase the observed effect. Furthermore, the ERG was very sensitive to factors like circadian time, room temperature, and electrical noise. To correct for such variations in the absolute values of the ERG, we normalized the absolute values of the injected eyes to those of the control eyes. As we found that injections of isotonic saline, PBS, or the scrambled control peptide had no effect on the ERG, we do not expect this to add any bias to the experimental design.
The results in both cellular systems (CGNs and ECNs) support the in vivo results, as the unmodified, acid form of PrP106 -126 was found to show a time-and concentration-dependent toxicity. We found the toxic effect to be significant and reproducible in both a short term setup (24 and 48 h of treatment in the ECNs) and a long term setup (9 days of treatment in the CGNs).
Table II: Effect of PrP peptides on CGN viability. CGNs were isolated from rat cerebellar granules and left to differentiate for 6 days before the addition of PrP peptides (100 μM) or the control compound camptothecin (20 μM). After long-term incubation (9 days), cell viability was measured using a WST-1 assay. Results are given as the percentage of viability of media controls.
Amidation of PrP106-126 significantly diminished the neurotoxic effect both in vivo and in vitro. The PrP106-126-amide did not induce significant ERG changes or apoptotic cell death in the retina after intraocular injections. Likewise, there was no effect on viability or apoptosis induction in neuronal cell cultures. In contrast, Salmona et al. (28) showed that although C-terminal amidation of PrP106-126 increases the random coil structure and reduces its ability to induce astroglia proliferation, it did not interfere with the peptide's neurotoxic potential toward rat cortical neurons. This discrepancy could be explained by the fact that Salmona et al. added the peptides to the embryonal rat cortical neurons 1 day after plating and then treated the neurons for 7 days. We added the peptides several days after plating in both cellular systems (3-4 days for the ECNs and 6 days for the CGNs), and it is thus possible that the less matured neuronal cultures in the experiments of Salmona et al. were more sensitive to peptide toxicity and hence showed greater susceptibility to the amidated PrP106-126 than our cultures. This view is furthermore supported by the fact that Salmona et al. reported larger effects on viability with the PrP106-126 (acid variant) peptide than we did.
With regard to the structure-relaxed and amidated RG2 peptide (with the addition of large basic groups C-terminally), it showed a level of toxicity more or less the same as that of the PrP106-126-amide. However, there was a slight reduction of the ERG b-wave at 14 dpi in vivo (and some TUNEL-positive cells in the retina at 15 dpi). In the ECN cultures we could not detect significant cell death after 24 or 48 h of treatment with the RG2-amide peptide. In the CGN cultures, however, although we did not detect significant cell death after 9 days of treatment with the RG2-amide, we did see slight morphological changes that could not be seen with either the PrP106-126-amide or the scrambled control peptide. We observed a moderate increase in the general caspase activity in the CGN cultures 3 days after the addition of the RG2-amide, but this was also observed with the scrambled control peptide. Caspase activation may, however, not always indicate cell death, as there are several examples of uncoupling, especially of caspase-3 activity and apoptotic cell death (46,47).
The effect of the structure relaxation due to the addition of the RG2 group could not be fully evaluated, as the amidated variant of the RG2 peptide was used here. To investigate this effect, the acid variant of the RG2 peptide should be used. We recently published a report demonstrating that the RG2 acid variant shows significantly reduced ability to form amyloid fibrils in water as assayed by ThT binding (35). We are currently performing toxicity experiments with the RG2 acid variant to clarify if this reduced tendency to form amyloid fibrils correlates with reduced neurotoxicity.
The effect of the aged PrP106 -126 variant on the ERG was found to last for at least 14 days, suggesting that the observed neuronal death is not reversible. A TUNEL test performed on eyes at 15 dpi showed apoptotic nuclei in all cell layers of the retina, whereas at 4 dpi TUNEL-positive cells were mostly restricted to the ganglion cell layer (Fig. 4, B and F). Interestingly, we observed that TUNEL-positive nuclei were restricted to the outer nuclear layer in mice killed at 1 dpi or 4 dpi after injection of a fresh solution of PrP106 -126 (Fig. 4, C and D). The same was observed 1 dpi after injection of the aged PrP106 -126 (Fig. 4G). As the outer nuclear layer mostly consists of photoreceptors, this finding could indicate that photoreceptors are more susceptible to prion peptide toxicity than the other cell types of the retina (e.g. bipolar and ganglion cells). In hamsters infected with experimental scrapie, photoreceptor degeneration is the earliest and most profound retinopathological finding (48 -50). Similarly, in mice infected with the 79A strain of scrapie, apoptosis and degeneration of the outer nuclear layer were observed at the onset of clinical disease (33). Other strains of scrapie in mice likewise induce retinopathy, with the most prominent pathological changes localized to the photoreceptors (51,52).
Labeling of PrP mRNA in C57/BL6-mice was found in all cell types of the retina, although most intensively in the photoreceptor layer (33). In concordance with this finding, Chishti et al. (34) found that expression of hamster PrP C in the retina of hamster and transgenic mice was most pronounced in the photoreceptor layer. It is possible that the high expression of PrP C on the photoreceptor cells renders them more vulnerable to prion peptide and PrP Sc toxicity. Accumulation of PrP Sc in the retina seems mainly to be located in the plexiform layers (in humans with Creutzfeldt-Jakob disease (53) and scrapie-infected hamsters (54)), reflecting a synaptic accumulation. We similarly performed immunohistochemistry and a paraffin-embedded tissue blot on retinas from 263K-infected hamsters and found the accumulation to be particularly prominent in the plexiform layers. 2 ThT binding indicates formation of amyloid fibrils (38), representing the end point of a multistep fibril formation pathway. Here, we found a correlation between ThT binding and toxicity, but this does not preclude the possibility that toxicity derives from peptide-aggregate intermediates formed on the pathway to ThT binding fibrils, e.g. protofibrils, as suggested by several other studies (55). It is conceivable that the molecular variants with reduced ThT binding studied here will also have a reduced ability to form protofibrils equivalent to the reduction in fibril formation. There is, however, some controversy regarding whether PrP106 -126 binds directly to PrP C or not; Gu et al. (57) reported a direct interaction, whereas Fioriti et al. (58) found that PrP106 -126 exerted its effect without interacting directly with PrP C . A precisely defined conformation and a direct interaction with PrP C are both hallmarks of PrP Sc neurotoxicity (15,43).
In conclusion, we found that the PrP C -dependent neurotoxicity of the PrP106-126 peptide was closely related to its ability to form fibrillar structures in aqueous solution, as structural changes interfering with this ability abolished the neurotoxicity of the peptide. This was found to be significant both in vivo and in two different primary neuronal systems. We are currently investigating how other small molecular changes, similarly shown to interfere with the fibrillation process (e.g. oxidation), will affect the PrP C -dependent neurotoxicity of the PrP106-126 peptide.
2 A.-L. Bergström, H. Laursen, and P. Lind, unpublished results.
Exploring the role of compactness in path-dependent land-taking processes in Italy
Land take, namely the conversion of natural land into impervious surfaces, is partly driven by path dependency, whereby dispersed settlements tend to spread more than compact ones over time. Yet there is limited knowledge about the extent to which specific aspects of compactness are associated with land take: a link that is instead crucial to formulate effective policies. This study investigates the impact of density, centrality, contiguity and degree of imperviousness by regressing land take data from 100 Italian NUTS3 administrative units for the period 2006–2012 against measures of the above-mentioned aspects as of 2006. Results indicate that higher shares of people in the 2000–2500 people km−2 density class, greater proximity of the population to urban centres, more contiguous urbanization patterns all help contain land take over time, whereas no significant effect was found for imperviousness. Increasing distance from protected areas reduces the positive effect of having more people live at densities of 2000–3000 people km−2, while steeper slopes enhance such effect. Planning interventions aimed at raising the share of people living at densities of 2000–2500 people km−2 as well as improving the degree of centrality or contiguity of urbanization patterns can lead to a decline in land take (measured as area of new land take per unit area of current land take) over a 6-year time span comprised between around 6 and 35% depending on location. Further research is needed to confirm the validity of our results and explore the feasibility of such interventions.
Introduction
Land take, namely the conversion of natural, seminatural and agricultural land into artificial surfaces (EEA, 2006), proceeds at alarming rates worldwide (UN, 2017a). In the European Union (EU) (including UK), for example, approximately 500 km 2 of natural land (an area the size of Budapest, Hungary) have been lost every year between 2012 and 2018, and the extent of artificial land has increased by 6.7% between 2000 and 2018 compared to a 5% population growth over the same period (EEA, 2020a). In the United States, the urban land area rose by 17% between 2002 and 2012 (from about 243,000 km 2 to 283,000 km 2 ), almost twice as fast as population (Bigelow & Borchers, 2017). In the previous decade, between 1990 and 2000, 14,000 km 2 of open space had been lost within the 274 metropolitan areas of the lower 48 states (McDonald et al., 2010). Such trends constitute a serious threat to human well-being for they involve the consumption of limited natural resources-land and soil-that are key to the supply of crucial ecosystem services, including food production, carbon storage, nutrient cycling, habitat provision, water purification and flood mitigation (Dupras et al., 2016;Eigenbrod et al., 2011;Lorenz & Lal, 2009;Pouyat et al., 2002;Sun et al., 2018;Suriya & Mudgal, 2012).
The urgency of the problem has been formally acknowledged by the United Nations, which defined an ad hoc indicator-ratio of land consumption rate to population growth rate (11.3.1)-to monitor progresses toward the achievement of sustainable development goal 11 ("Make cities and human settlements inclusive, safe, resilient and sustainable") in the framework of the 2030 Agenda for sustainable development (UN, 2017b). On a strategic level, the European Commission (EC) has proposed to have policies in place by 2020 that aim to bring net land take down to zero by 2050 (EC, 2011). At the core of these policies, besides the idea of recycling areas that were once used and are now inactive, and that of compensating new constructions on natural land through renaturation of unused built-up areas, is the commitment to minimize new developments on unbuilt open space or agricultural areas (Science for Environment Policy, 2016). In order to design such policies, however, administrators and planners should not simply be able to assess the extent to which new developments can reasonably be made sustainable through infilling and renaturation, but they should also have a clear understanding of how current patterns of development may stimulate or restrain land take in the future. This is to avoid the promotion of forms of development that, while adding little to the current extent of artificial surfaces, may induce unintended expansions of such surfaces in the years to come owing to inertia in urbanization processes.
Over the last three decades, through both theoretical and empirical studies, scholars have acquired a deep knowledge on the drivers of land take (Colsaet et al., 2018). There is significant evidence, for example, that increasing levels of population growth (Deng et al., 2008;Marshall, 2007), income (Deng et al., 2008;Kuang et al., 2014;Weilenmann et al., 2017), proximity to transportation infrastructures (Müller et al., 2010;Tian & Wu, 2015), road density (Guastella et al., 2017;Oueslati et al., 2015) and administrative fragmentation (Carruthers, 2003;Wassmer, 2006) stimulate land take. Conversely, higher fuel prices (Ortuño-Padilla & Fernandez-Aracil, 2013), more protected areas (Irwin & Bockstael, 2004;Zoppi & Lai, 2014), urban growth boundaries (Wassmer, 2006) and steeper terrains (Christensen & McCord, 2016;Deng et al., 2010;Müller et al., 2010) have been proven to limit land take. A relatively limited body of research has also explored path-dependent processes, namely dynamics by which past development affects future land take, mostly showing that higher density and greater compactness (i.e. less sprawl) today tend to foster comparably denser developments and less land take tomorrow (Burchfield et al., 2006;Paulsen, 2014;Siedentop & Fina, 2012).Yet there is a substantial lack of information about the extent to which specific aspects of compactness may affect future land take. This is in fact highly relevant in the light of the multi-faceted nature of the compactness concept, which encompasses such diverse variables as population density, development contiguity, land use mix, etc. (Neuman, 2005).
First, previous studies have generally considered average population density over a territory (Zoppi & Lai, 2014), but not the distribution of the population across different density classes, although the share of people living at low to medium urban densities is a very good indicator of sprawl in a region (Laidley, 2016;Lopez & Hynes, 2003;Zambon & Salvati, 2020). Second, little is known about whether the degree of centrality (i.e. the proximity of development to a central location such as a central business district or a major city), an important measure of compactness (Cutsinger et al., 2005;Galster et al., 2001;Kaza, 2020;Orsi, 2019), may trigger future land take. Third, while compact settlements are commonly characterized by contiguous development and a clear boundary between built-up and natural land (Heimlich & Anderson, 2001;Neuman, 2005), there is no information about the effect of those elements on future land take. Fourth, previous studies have not investigated path-dependent processes related to the degree of imperviousness of built-up areas, a variable that is associated with soil sealing (Salvati, 2016) and therefore the ability of land to actually deliver ecosystem services (Haase & Nuissl, 2007).
This study aims to address the above research gaps by answering three research questions. Do the four above-mentioned aspects of compactness (population distribution, centrality, contiguity, imperviousness) at a given point in time have a significant association with land take over an upcoming period? Is the impact of such aspects on land take moderated by other determinants of land take? What is the potential for planning interventions targeting these aspects to actually contain land take in the future?
In order to answer these questions, land take occurring between 2006 and 2012 in Italian NUTS3 administrative units (relative to land take as of 2006) was regressed against different variables describing the four above-mentioned aspects of built-up areas as of 2006 while controlling for other determinants of land take. The study was conducted in Italy given the magnitude of land-taking processes the country has been experiencing over the last decades and the relevance of the topic in the national debate both at a policy (Munafo', 2020) and scientific (Munafo'et al., 2013;Pileri & Maggi, 2010;Zoppi & Lai, 2014) level. The focus on NUTS3 administrative units (equivalent to Italian provinces, namely administrative units including several municipalities), rather than single municipalities or cities, was meant to capture possible shifts in urban development from major to minor settlements within the same administrative and economic territory.
Study area
Italy has a total area of slightly over 301,000 km 2 , of which 7% (21,400 km 2 ) is currently covered by impervious surfaces (Munafo', 2020). Given a population of roughly 60 million, the amount of artificial land cover per capita is around 355 m 2 , which is in line with the EU average (363 m 2 ) (Eurostat, 2020a). In terms of recent land take, Italy has added 954 km 2 (16 m 2 per capita) of artificial surfaces in the 2000-2018 period (i.e. roughly a 5% increase compared to 2000 levels), although most of these in the 2000-2006 period (494 km 2 or 8.2 m 2 per capita) and the 2006-2012 period (356 km 2 or 5.9 m 2 per capita) (EEA, 2020b) (please note that land take data from EEA and Eurostat may not be perfectly comparable to data from national institutes, e.g. Munafo', 2020). While such growth is much lower than that of, among others, Spain (53 m 2 ), the Netherlands (38 m 2 ), France (32 m 2 ) and the EU (28 m 2 ), it is comparable to that of Germany and the United Kingdom (16 m 2 for both), and higher than that of Belgium (9 m 2 ) (EEA, 2020b).
Recent estimates show that net land take (i.e. new artificial surfaces minus renaturation of previously artificial land cover) in Italy is now progressing at 14.2 ha per day, slightly higher than what it used to be in 2015 (Munafo', 2020). Worryingly enough, most of this increase in artificial land cover (up to 90%) is taking place in areas that would be highly suitable for agricultural practices (i.e. not too steep, outside protected areas, at low flooding and landslide risk), therefore jeopardizing food security of the local population (Gardi et al., 2015;Munafo', 2020).
Both land take per capita and land take increment vary considerably within Italy. The former is inversely related to population density, hence taking on low values in such dense regions as Lombardia (286 m 2 ), Lazio (235 m 2 ) and Campania (240 m 2 ), and high values in such scarcely populated regions as Friuli-Venezia-Giulia (519 m 2 ), Umbria (501 m 2 ) and Basilicata (554 m 2 ) (Munafo', 2020). The latter is particularly strong in the most economically dynamic part of the country (Lombardia, Veneto and Emilia-Romagna in the North), around Rome and Naples, and in Puglia and Sicily in the South (Munafo', 2020).
This research is conducted on NUTS3 administrative units, which are relatively small territories (mostly between 2000 and 4000 km 2 ), one administrative level below the above-mentioned regions and corresponding to Provinces. The decision to pick this administrative level to obtain the statistical units for the study was driven by the goal of having a sufficiently high number of relatively uniform units that include both urbanized areas and countryside, therefore guaranteeing statistical robustness and a full appreciation of urban expansion phenomena. In fact, regions (NUTS2 administrative level) would be too few (20) and large, whereas municipalities are extremely heterogenous, with some being predominantly urbanized and some predominantly rural.
Data
The extent and spatial pattern of land take was extracted from the 2006 and 2012 Copernicus high-resolution imperviousness density datasets, which report the degree of imperviousness (%) at 20-m resolution for the whole of Europe (https://land.copernicus.eu/). These datasets were also used to compute the degree of imperviousness and to assess the shape of built-up areas. The boundaries of the NUTS3 administrative units were obtained from Eurostat in shapefile format (Eurostat, 2020b). Basic demographic and socioeconomic data such as population and GDP per NUTS3 unit were also available on the Eurostat portal (Eurostat, 2020c, 2020d). Information about population distribution was extracted from the GEOSTAT 2006 population-grid dataset, which reports the population census in a 1 km 2 grid format. The identification of urban areas was based on the 2006 degree of urbanisation classification for Europe and the associated spatial datasets (Dijkstra & Poelman, 2014). The road network was assembled by combining primary and secondary roads from the Global Roads Open Access Data Set (gROADS, v1, 1980-2010) (CIESIN & ITOS, 2013), with tertiary roads extracted from OpenStreetMap. The presence and shape of protected areas were derived from the shapefile of Natura 2000 sites, which is produced and updated by the European Environment Agency. Finally, the EU-DEM version 1.1 at 25-m resolution produced by the Copernicus Land Monitoring Service (https://land.copernicus.eu/) was used to compute slope.
Variables
The dependent variable is relative land take between 2006 and 2012, intended as m 2 of artificial surfaces added in the 2006-2012 period per hectare of artificial surfaces existing in 2006. In fact, the natural logarithm of this is considered as a way to get a quasi-normally distributed variable, enhance the linear relation with the independent variables and reduce potential issues of heteroskedasticity. The extension of land take in 2006 was estimated by considering land parcels whose degree of imperviousness (ranging between 0 and 100%) in that year was higher than zero as specified in the relevant spatial dataset (Fig. 1). The area of land taken between 2006 and 2012 was computed by identifying parcels of land that had a degree of imperviousness equal to zero in 2006 and higher than zero in 2012 ( Fig. 1). Renaturation processes were not considered in the analysis, hence land parcels whose degree of imperviousness dropped from above zero in 2006 to zero in 2012 were simply disregarded.
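The construction of the dependent variable can be sketched as follows in Python, assuming the 2006 and 2012 imperviousness rasters of one administrative unit are already aligned 20-m grids loaded as numpy arrays (no-data handling and clipping to the NUTS3 boundary are omitted):

import numpy as np

CELL_AREA_M2 = 20 * 20  # the Copernicus imperviousness layers have 20-m resolution

def log_relative_land_take(imperv_2006, imperv_2012):
    # m2 of artificial surface added in 2006-2012 per hectare of artificial surface in 2006,
    # log-transformed as in the regression model; renaturation is disregarded.
    built_2006 = imperv_2006 > 0
    new_take = (imperv_2006 == 0) & (imperv_2012 > 0)  # zero imperviousness in 2006, above zero in 2012
    area_2006_ha = built_2006.sum() * CELL_AREA_M2 / 10000.0
    area_new_m2 = new_take.sum() * CELL_AREA_M2
    return np.log(area_new_m2 / area_2006_ha)

# Toy example with a 4 x 4 grid of imperviousness percentages.
imp06 = np.array([[0, 0, 10, 80], [0, 0, 0, 60], [0, 0, 0, 0], [0, 0, 0, 0]])
imp12 = np.array([[0, 5, 10, 80], [0, 0, 20, 60], [0, 0, 0, 0], [0, 0, 0, 0]])
print(log_relative_land_take(imp06, imp12))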
All independent variables were computed as of 2006 given the explicit objective of providing knowledge about the extent to which urbanization patterns in a given year may affect land take over the following years. The sole exceptions to that are variables related to population and income change, which were considered as control variables (and computed as differences between 2006 and 2012 values) assuming that, when conducting predictive studies, practitioners could always extrapolate these from reliable future projections.
The effect of population distribution on future land take was tested using multiple variables reporting the shares of NUTS3 regions' population living at low to medium urban densities. In particular, the following ranges were considered: 300-500, 500-1000, 1000-1500, 1500-2000, 2000-2500 and 2500-3000 people km −2 .
Centrality was measured as the average travel time (minutes) of a NUTS3 region's inhabitants to the closest urban area, as defined by the degree of urbanisation classification: namely, a cluster of contiguous grid cells of 1 km 2 with a population above 5000 and a density of at least 300 people km −2 (Dijkstra & Poelman, 2014). Travel time from any 1 km 2 grid cell to the nearest urban area was computed through a cost distance operation assuming speeds of 120 km h −1 , 60 km h −1 , 40 km h −1 on primary (highways), secondary and tertiary roads, respectively. Average travel time was obtained as the summation of the products of travel time and population on a cell-by-cell basis divided by the overall population of a NUTS3 region, as follows: TT = Σ_i (tt_i × p_i) / P, where tt_i is travel time from grid cell i to the nearest urban area, p_i is the population of cell i and P is the overall population of the NUTS3 administrative unit.
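In code, the population-weighted average is a one-liner once the cost-distance travel times have been computed; the Python sketch below assumes the per-cell travel times (minutes) and populations of one NUTS3 unit are available as arrays, with values that are purely illustrative.

import numpy as np

def average_travel_time(travel_time_min, population):
    # Population-weighted mean travel time to the nearest urban area:
    # sum_i(tt_i * p_i) / P over the 1-km2 grid cells of a NUTS3 unit.
    tt = np.asarray(travel_time_min, float)
    pop = np.asarray(population, float)
    return (tt * pop).sum() / pop.sum()

# Toy example: four grid cells with their travel times (min) and resident populations.
print(average_travel_time([5, 12, 30, 45], [4000, 1500, 300, 50]))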
The contiguity of development and the blurriness of the interface between built-up and natural land were measured as the average number of built-up cells within a radius of 50 m around each built-up cell in a NUTS3 region. This was done by: reclassifying cells of the 2006 imperviousness map as either 1 (imperviousness greater than zero) or 0 (imperviousness equal to zero); running a neighborhood operation to compute the summation of cell values within a 100 m × 100 m (5 × 5 cell) window around each cell; and calculating the average value across all built-up cells within a NUTS3 administrative unit. As shown in Fig. 2, the variable decreases whenever development gets more scattered and/or the boundary of the built-up area becomes less definite.
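A possible implementation of this neighborhood operation is sketched below in Python; whether the focal cell itself is counted inside the 5 × 5 window is not specified above, so excluding it here is an assumption made for illustration.

import numpy as np
from scipy import ndimage

def contiguity(imperviousness):
    # Average number of built-up cells in the 5 x 5 (100 m x 100 m) window
    # around each built-up cell of a NUTS3 administrative unit.
    built = (np.asarray(imperviousness) > 0).astype(float)  # 1 = built-up, 0 = not built-up
    neighbours = ndimage.convolve(built, np.ones((5, 5)), mode="constant", cval=0.0)
    neighbours -= built  # exclude the focal cell itself (assumption)
    return neighbours[built.astype(bool)].mean()

# Toy example: a compact 3 x 3 block scores 8, two detached cells score 0.
compact = np.zeros((9, 9)); compact[3:6, 3:6] = 50
scattered = np.zeros((9, 9)); scattered[2, 2] = 50; scattered[6, 6] = 50
print(contiguity(compact), contiguity(scattered))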
The degree of imperviousness was simply measured as the median, within a NUTS3 region, of the imperviousness values of cells whose imperviousness is greater than zero. The median was chosen in lieu of the mean because it is less sensitive to extremes, therefore conveying more consistent information for areas characterized by relatively uniform imperviousness levels (e.g. a majority of high-imperviousness cells and few low-imperviousness cells).
Two common path dependency variables-population density and land take per capita-were included to have a reference against which to compare the explanatory power of the variables selected to measure the four characteristics presented above.
Finally, several variables were considered to control for well-known drivers of land take as suggested in literature. Among demographic and socioeconomic variables, we considered population in 2006, population change (%) between 2006 and 2012, GDP per capita in 2006 and GDP per capita change (%) between 2006 and 2012. In order to account for administrative fragmentation and governance conditions, the number of municipalities per km 2 and the average distance of non-built-up land parcels (as of 2006) to protected areas were considered. The average slope of non-built-up land parcels (as of 2006) was included to control for the impact of topography on land take. Finally, three binary dummy variables were included to account for intrinsic differences between Italian geographical macro areas NUTS3 administrative units belong to: North (including regions Valle d'Aosta, Piemonte, Lombardia, Liguria, Veneto, Trentino-Südtirol, Friuli-Venezia-Giulia, Emilia-Romagna), Center (including regions Toscana, Umbria, Marche, Lazio) and South (including regions Abruzzo, Molise, Campania, Puglia, Basilicata, Calabria, Sicilia). Given its unique location (i.e. far from mainland and not easily associable with either the Center or the South), the island of Sardegna was assumed to constitute a macro area of its own, to which no dummy variable was assigned (i.e. it was considered as base level for the other three dummy variables).
The actual number of statistical units included in the analysis (100) is lower than the number of Italian NUTS3 administrative units (110): some units were removed because of an excess of unclassified pixels in the 2006 imperviousness layer (Avellino, Biella, Cuneo, Imperia, Potenza, Savona), while others were removed because they present overly high density levels and a predominantly urbanized territory compared to the average (Milano, Monza-Brianza, Napoli, Trieste). Descriptive statistics of both dependent and independent variables, and Pearson coefficients of pairwise correlations between the dependent variable and all independent variables, are presented in Tables 1 and 2, respectively. Raw values of the dependent variable (i.e. before log-transformation) for all NUTS3 administrative units considered in the analysis are shown in Fig. 3.
Statistical analyses
The first research question (Do the four aspects of compactness at a given point in time have a significant association with land take over an upcoming period?) was answered by regressing land take between 2006 and 2012 against variables describing the four aspects and the control variables presented above. This was done by first developing a basic minimal model made up of a few significant and uncorrelated variables from the set of control variables presented in Sect. 3.2, and then individually testing the significance and contribution to the basic model of the variables describing the four aspects. This formally corresponds to testing the alternative hypothesis

$H_1: \beta_j \neq 0$

against the null hypothesis

$H_0: \beta_j = 0$

where β_j is the regression parameter associated with variable j. As parameters were estimated using the Ordinary Least Squares (OLS) method, the OLS assumptions were verified. Among other things, the basic model was tested for spatial autocorrelation by computing the global Moran's I of the residuals according to the following formula:

$I = \dfrac{n}{S_0} \cdot \dfrac{\sum_{i}\sum_{j} w_{i,j}\, z_i z_j}{\sum_{i} z_i^2}$

where n is the number of features, z_i is the deviation of the residual for feature i from the mean, w_{i,j} is the spatial weight between features i and j, and S_0 is as follows:

$S_0 = \sum_{i}\sum_{j} w_{i,j}$

The second research question (Is the impact of the four aspects of compactness on land take moderated by other determinants of land take?) was answered by adding to the previously developed regression models interaction terms between each of the aspects of compactness and the determinants of land take included in the basic model. Before the analysis, continuous predictors were mean-centered to avoid issues of multicollinearity.

(Fig. 2 Measures of contiguity for four progressively less contiguous (from a to d) urbanization patterns. Contiguity was measured as the average number of built-up cells within a 5 × 5 filter around each built-up cell.)
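To illustrate the spatial-autocorrelation diagnostic above, the snippet below implements the global Moran's I of the residuals in plain NumPy. The binary contiguity weight matrix and residual values are invented for the example and are not taken from the study, where weights would be derived from the actual NUTS3 geometry.

```python
import numpy as np

def morans_i(residuals: np.ndarray, weights: np.ndarray) -> float:
    """Global Moran's I of OLS residuals.

    residuals : 1-D array with one residual per NUTS3 unit.
    weights   : n x n spatial weight matrix, w[i, j] = weight between units i and j.
    """
    n = residuals.size
    z = residuals - residuals.mean()      # deviations from the mean residual
    s0 = weights.sum()                    # S0 = sum of all spatial weights
    numerator = z @ weights @ z           # sum_i sum_j w_ij * z_i * z_j
    denominator = (z ** 2).sum()
    return (n / s0) * (numerator / denominator)

# Toy example: four units on a line with binary contiguity weights.
w = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
res = np.array([0.5, 0.4, -0.3, -0.6])
print(round(morans_i(res, w), 3))  # 0.403, i.e. positive spatial autocorrelation
```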
The third research question (What is the potential for planning interventions targeting these aspects to actually contain land take in the future?) was answered by using the previously estimated regression coefficients of the statistically significant aspects of compactness to calculate, for each administrative unit, the decline in land take that would be associated with "reasonable" improvements in such aspects, similarly to what was done by Zoppi and Lai (2015). For a given administrative unit, "reasonable" improvements were defined as land-take-restraining variations of the variables describing the above-mentioned aspects (i.e. increases or decreases depending on whether a variable is negatively or positively associated with land take) that are quantitatively compatible with levels of the same variables in nearby administrative units. This is based on the assumption that nearby administrative units present similar geographical and spatial planning characteristics, and therefore offer a realistic benchmark for what is achievable.
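A minimal sketch of this scenario calculation is given below. It assumes the log-transformed dependent variable noted earlier, so a change Δx in a predictor with coefficient β scales expected land take by exp(βΔx); the benchmark rule (move to the most land-take-restraining level observed among neighbouring units) and all numbers in the example are illustrative assumptions, not the exact procedure or estimates of this study or of Zoppi and Lai (2015).

```python
import numpy as np

def projected_reduction(beta: float, current: float, neighbours: np.ndarray) -> float:
    """Percent decline in expected land take if a predictor moves from its current
    level to the most land-take-restraining level seen among neighbouring units.

    Assumes a log-linear model: new_land_take = old_land_take * exp(beta * delta).
    """
    if beta < 0:                                 # higher values of the predictor restrain land take
        target = max(current, float(neighbours.max()))
    else:                                        # lower values of the predictor restrain land take
        target = min(current, float(neighbours.min()))
    delta = target - current
    return (1.0 - np.exp(beta * delta)) * 100.0  # percent decline (0 if no improvement is possible)

# Illustrative contiguity scenario: beta = -0.12 per extra built-up neighbour cell,
# a unit currently at 14.0 whose neighbours reach 16.5.
print(round(projected_reduction(-0.12, 14.0, np.array([15.2, 16.5, 13.8])), 1))  # 25.9
```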
Results
Three predictors were eventually retained as control variables in the basic regression model used to test the effect of the four aspects of compactness on land take: distance from Natura 2000 areas, slope and south (Table 3). These cover the main categories of determinants of land take: regulation (i.e. development restrictions), environmental features and the economy (Southern regions showing, on average, lower levels of per capita income and economic growth than Central and Northern regions), respectively. The signs of the regression coefficients associated with these variables are as expected: positive for the distance from Natura 2000 areas (the farther from protected areas, the more land take), negative for the other two (land take is lower where terrain is steeper and in Southern regions). Given the log-transformed dependent variable, regression coefficients are interpreted as elasticities or semi-elasticities depending on whether independent variables are also log-transformed or not. All else being equal, a 1% increase in the average distance of pervious land parcels from Natura 2000 areas is associated with a small (below 1%) increase in land take. The adjusted R 2 of the basic model (Table 3) indicates that the model explains 28.5% of the variance of the dependent variable and sets the standard against which to compare the explanatory power of the variables selected to describe the four aspects of compactness.
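As a brief illustration of how these coefficients map onto the percentage effects quoted below (the numbers here are made up, not values from Table 3): with a log-transformed dependent variable and an untransformed predictor, a coefficient β implies

$\dfrac{\Delta y}{y} = e^{\beta\,\Delta x} - 1 \approx \beta\,\Delta x \quad \text{for small } \beta\,\Delta x$

so an illustrative β = −0.10 with Δx = 1 corresponds to e^{−0.10} − 1 ≈ −0.095, i.e. roughly a 9.5% decline in land take; when the predictor is also log-transformed, β is read directly as an elasticity (the percent change in land take per 1% change in the predictor).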
Among variables chosen to estimate the effect of population distribution across density classes, only the share of people living at densities comprised between 2000 and 2500 people km −2 has a significant effect (column 7 in Table 3), whereby a 1% increase of such share is associated with a 5% decline in land take over the next 6 years compared to initial period levels. The adjusted R 2 of this model (0.275) suggests the explanatory power of the share of people living in this density class is lower than that of an administrative unit's average population density. Although not significant, the signs of regression coefficients assigned to the other density classes disclose something interesting and meaningful: larger shares of people living at lower density levels (300-1500 people km −2 ) in 2006 are associated with more land take over the next six years, whereas larger shares of people living at higher density levels (1500-3000 people km −2 ) are associated with less land take.
The coefficient associated with the centrality variable is significant and has a positive sign (column 9 in Table 3), indicating that every extra minute of average travel time to the nearest town in 2006 increases the number of m 2 of land take in the 2006-2012 period per hectare of artificial surface in 2006 by 3%. The adjusted R 2 in this case (0.280) reveals an explanatory power nearly equal to that of average population density.
The coefficient associated with the contiguity variable is significant and negative (column 10 in Table 3), suggesting that a one-point increase in the average number of built-up cells within a 50-m radius around each built-up cell (i.e. a more contiguous fabric) is associated with a 12% decline in the amount of m 2 of land take in the 2006-2012 period per hectare of land take as of 2006. The adjusted R 2 for this model (0.293) suggests the explanatory power of this variable is higher than that of average population density.
The coefficient associated with the imperviousness variable is not significant (column 11 in Table 3), preventing any consideration on the link between the current degree of imperviousness and future land take.
The analysis of interaction effects shows that slope and distance from Natura 2000 areas respectively moderate the impact on land take of the shares of people in the 2000-2500 and 2500-3000 people km −2 density classes, whereas membership to a Southern region moderates the impact of imperviousness on land take (Table 4). An extra degree of average slope (e.g. a province whose pervious surfaces have an average slope of five degrees compared to one where average slope is four degrees) reduces the land take associated with one percent more people living in the 2000-2500 people km −2 density class by about 1%. A one percent increment in the average distance from Natura 2000 areas increases the land take associated with one percent more people living in the 2500-3000 people km −2 density class by about 0.12%. Being in a Southern region increases the land take associated with one extra point of median imperviousness by about 5%. As the regression coefficient of the imperviousness variable in this case is nearly significant (p < 0.10) and negative (−0.024), we can say that being in a Southern region reverses the effect of imperviousness on land take, whereby higher imperviousness levels stimulate, rather than restrain, land take.
The percent decline in m 2 of additional land take per ha of current land take associated with improvements in the share of people living in the 2000-2500 people km −2 density class, the average travel time to towns and the contiguity of urban fabric (imperviousness was not considered because its effect on land take was found insignificant in the statistical analysis) is presented in Fig. 4, which also shows the supposedly most effective intervention for each NUTS3 administrative unit. In general, interventions on any of the three aspects above might yield declines in land take over a 6-year horizon (compared to current levels of land take) of between about 6% and 35%. Assuming an expected addition of impervious surfaces over the 6-year period of 300 m 2 per hectare of land take at the start of the period, these interventions could then bring this figure down to values ranging between 282 and 195 m 2 per hectare (i.e. a reduction between 18 and 105 m 2 per hectare). Interventions on densification seem preferable in the North (particularly the area around Milan), interventions on centralization may be more effective in Toscana, some of the South (particularly Puglia and Northern Calabria) and most of the island of Sardegna, whereas interventions improving contiguity might be best in the Northwest and Northeast, most of the central part of the country and the island of Sicilia.
Discussion and conclusion
This study expands our knowledge of the role of compactness in path-dependent land-taking processes by showing that, in Italian provinces, population distribution across density classes, the spatial distribution of people over the landscape and the degree of contiguity of the urban fabric today are significant predictors of future land take. In particular, greater compactness today, in terms of higher shares of people living at medium densities, more clustering of the population around towns and cities, and a more contiguous urban fabric, helps contain land take in the future.
Although six density classes were considered to fully describe population distribution from very low to medium urban densities, only the share of people in the 2000-2500 people km −2 class proved significant. This may be due to the fact that a considerable proportion of people living at such density levels generally implies large extents of a kind of urban environment (i.e. mid-rise buildings, moderate land use mix, proximity to services) that is unlikely to stimulate much expansion of impervious surfaces for the creation of dedicated transportation or commercial infrastructures. As a way of containing future land take, planners may then aim to promote these contexts, which can strike a decent balance between density and livability. The signs of regression coefficients of the shares of people in the other density classes, though insignificant, suggest the existence of a density threshold at around 1500 people km −2 , such that increasing the share of people who live at densities below this stimulates future land take. In many Italian settlements, this threshold marks a rather clear separation between the urban core (including not just historical city centres, but also many surrounding post-war developments) and the outer fringes, which are more fragmented and directly in contact with the countryside, and therefore more prone to further expansions. This is also consistent with the European Commission's harmonised definition of cities and rural areas, going under the name of "new degree of urbanisation" (Dijkstra & Poelman, 2014), which sets at exactly 1500 people km −2 the density threshold above which city centres are defined. The positive association between the centrality variable and land take affords some considerations on both the ratio of urban to rural dwellers and the accessibility of urban centres. On the one hand, assuming the utilized definition of urban area is appropriate (an area of at least 5000 people living at densities above 300 people km −2 ) (Dijkstra & Poelman, 2014), the more people live in a rural context, and hence at some distance from an urban area, the more land take we can expect in the future. While this is in conflict with studies detecting a positive correlation between increases in urban population and land take (Angel et al., 2011; Zhang & Su, 2016), it is perfectly consistent with the idea that smaller fragments of built-up land may become the seeds of future developments. This is in fact what happens in Italy, where spontaneous developments around historical structures in rural settings, also called "sprinkling" (Romano et al., 2017a), are well known and may have worse consequences than traditional sprawl (Romano et al., 2017b). On the other hand, the farther people live from urban centres, in terms of time needed to get there, the more land take we can expect in the future. This is also counterintuitive in the light of location theory (Alonso, 1964) and the well-documented positive association between land take and accessibility (Braimoh & Onishi, 2007), yet it is consistent with the idea that isolated communities expand through more land-intensive developments given lower land prices.
Moreover, the fact that the variable adopted to measure centrality (per capita travel time to the nearest urban centre) is poorly correlated with the size of a NUTS3 administrative unit (ρ = 0.22) suggests that these considerations hold true regardless of the context. From a planning perspective, it is then important to make sure that most new developments take place in and around cities and that small isolated communities do not grow excessively. The coefficient of the contiguity variable complements findings by Burchfield et al. (2006): it is not just development on unincorporated land (i.e. areas beyond the municipal boundary, where planning regulations are lacking or weaker) that induces more sprawl and therefore land take, but also poorly contiguous development as such. Considering how the variable was measured (i.e. a 100 m × 100 m filter detecting the number of built-up cells around each built-up cell), we can say that even a micro-scale lack of contiguity, such as a rough city boundary, might induce a considerable expansion of impervious surfaces over time. As lack of contiguity is generally the outcome of weak spatial planning, our findings are consistent with a vast literature pointing to a negative association between regulation and land take (Burchfield et al., 2006; Huang et al., 2009; Nuissl & Schroeter-Schlaack, 2009; Wang et al., 2017). From a policy perspective, this is a call for strong and unitary spatial planning that prevents voids and leapfrogging, as these can then stimulate land-intensive ancillary developments.
Results suggest the explanatory power of the above-mentioned variables is comparable to, or even higher than, that of average population density, a common predictor of path-dependent land-taking processes. We found population density to be inversely related to land take, in line with previous studies considering the number of people per area of sealed land (Siedentop & Fina, 2012) or housing density (McDonald et al., 2010; Paulsen, 2014), but in conflict with findings by Zoppi and Lai (2014) and Zoppi and Lai (2015), who also considered average population density. This may be due to the choice of the dependent variable, which in the latter studies was the percentage of an administrative unit's area changing from non-artificial to artificial status, whereas in this study it was the additional land take over the monitored period as a proportion of land take at the start of the period.
The significance of the selected compactness-related variables was tested in a model that controlled for basic determinants of land take including regulation (distance from Natura 2000 areas), geography (slope) and the economy (South). Regression coefficients for these predictors are consistent with the literature, whereby the distance from protected areas is positively (or the extent of protected areas is negatively) associated with land take (Zoppi & Lai, 2014), slope is negatively associated with land take (Christensen & McCord, 2016; Zoppi & Lai, 2014) and income (which is lower in the South) is positively associated with land take (Angel et al., 2011; Deng et al., 2008, 2010). In fact, while the binary variable South can account for the major economic gap between the North-Center and the South, it may have also captured a wider array of aspects varying significantly between these two parts of the country, including the density of transport infrastructures, the extent of industrial areas and the impact of planning regulations. Purely monetary variables, such as GDP per capita and growth of GDP per capita, proved either poorly significant or badly affected by the 2008 financial crisis, and were therefore deemed unsuitable for the basic model. Results of the analysis of interaction effects conducted to answer the second research question, though limited, have nonetheless some relevant planning implications. Increasing distances from, or a smaller extension of, protected areas may reduce the land-take-restraining power of having a higher share of people in a medium to high density class, whereas steeper slopes can enhance such power. This highlights the importance of establishing protected areas particularly in flat territories as a way to ensure housing policies aimed at increasing the share of people in medium to high density classes can actually deliver their calming effect on land-taking processes over time. The location of an administrative unit seems to have an effect on the impact of the median level of imperviousness, with land-taking processes being slowed down by higher degrees of imperviousness in the North-Center and being accelerated in the South. This might reflect more land take around large extensions of low-imperviousness peri-urban areas (e.g. low-density suburbs, business parks) in the North-Center and around completely filled (and therefore highly impervious) urban areas in the South.
Results of the analysis conducted to answer the third research question, though purely theoretical, show that intervening now on population distribution across density classes, centrality and contiguity may allow a significant containment of land-taking processes in the future. A 6% land take reduction, the minimum any of these measures might achieve, implies that an average province, with 12,000 hectares of land take at the start of the period and an expected 300 m 2 of extra land take per hectare of initial land take, could spare 21.6 hectares (or more than a fifth of a km 2 ) of natural land over the next 6 years. Under the most optimistic scenario, figures might be 6 times higher. The magnitude of these land take reductions depends on the combination of measure adopted (e.g. increasing the share of people living in a mid-density context vs. enhancing centrality) and improvement that is possible in a given province. We assumed the latter to be approximated by the performance of neighboring administrative units, though this may over-or underestimate the achievable improvement in case one or more of the neighbors present peculiar historical, social, governmental or geographical conditions that have allowed particularly high or low levels of the variables describing the three aspects. These results have to be considered merely theoretical for another couple of reasons though. First, the proposed interventions on density, centrality and contiguity may prove impossible in a given province for practical reasons (e.g. lack of funding, planning regulations, market conditions, people's preferences). Second, the elasticity values measured through the OLS model hold true over a limited range around the mean, while they may be rather biased for marked improvements on any of the three aspects.
In addition to those already mentioned in previous paragraphs, this study has further limitations that are due to specific methodological choices. Among empirical studies on the determinants of land take and urban sprawl, this is in fact one of very few (Burchfield et al., 2006; Paulsen, 2014) to regress the extent of land take or the degree of sprawl at the end of a period against urbanization patterns at the start of the period as a way to capture path-dependent processes. The fact that variables chosen to measure the four aspects of compactness were computed based solely on conditions as of 2006 (i.e. no consideration of where development took place between 2006 and 2012), though contributing to making our findings "policy-ready" (i.e. knowing current urbanization patterns, policy-makers can use our estimates to predict their impact on future land take), may have reduced the goodness-of-fit of the model.
As to the above-mentioned four variables, each of them has some weakness. The effect of population distribution was investigated using a set of classes that may not perfectly reflect different living environments throughout the country. Centrality only accounted for access to cities at large, while largely disregarding attraction flows to the main services, which may be decentralized but still foster land take. Contiguity was measured through an ad hoc variable that is based on an assessment of the land use pattern within a 50-m radius, although fragmentation may be better detected using other radii. Imperviousness was summarized using a statistic (i.e. median) that can hardly capture how it actually varies throughout a NUTS3 region.
Another important methodological decision behind this study regards statistical units, which are not cities or metropolitan areas (Burchfield et al., 2006;Paulsen, 2014), but small regions encompassing urban and rural areas as well as uninhabited territories. While this decision allowed us to (at least quantitatively) capture shifts of development between city and countryside, it has certainly made the interpretation of results difficult (i.e. harder to speculate where certain urbanization patterns encourage further development). As a side note, the use of these larger statistical units also forced us to work with a relatively small sample size (n = 100): something that may have reduced the number of significant parameters obtained, but that certainly gives even more credibility to the significant ones. In fact, the sample size was smaller than anticipated because some statistical units had to be eliminated for either lack of data or their outlier nature. While units eliminated for the former reason are a minor issue, those eliminated for the latter reason may have affected the results because they represent some of the most economically dynamic areas in Italy (therefore likely experiencing strong increases in land take). Their elimination was unavoidable, however, because the minimal difference between urbanized area and overall area characterizing them (and somehow fostering more compact development) puts them on an entirely different level compared to all others.
Being almost entirely based on European-level data, the methodology adopted in this study is easily replicable in any other European country, although context-specific considerations may be needed for the definition of ad hoc control variables (e.g. binary variables defining different geographical regions) and the interpretation of results. Application of the method in other contexts instead may be thwarted by lack of high-resolution land cover data, which would make the estimation of land take and the assessment of contiguity and imperviousness largely inaccurate.
The findings of this study corroborate the notion that, when it comes to land take and urban sprawl, past development affects future conditions. Having quantified the impact of density, centrality and contiguity on the future expansion of impervious surfaces, it gives policy-makers some reference about which land take reductions they can reasonably expect from intervening on each of these aspects. Further research is needed to confirm the validity of our results and to figure out the feasibility of interventions on the aspects above.
|
v3-fos-license
|
2019-03-16T13:13:38.236Z
|
2017-03-29T00:00:00.000
|
42179080
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "HYBRID",
"oa_url": "https://doi.org/10.5772/66933",
"pdf_hash": "1d3fe2e10ade819d0d298f981f3f315438cba845",
"pdf_src": "ScienceParsePlus",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46533",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "6ca6915084f7d1aa585539e7683e633de3eb76e3",
"year": 2017
}
|
pes2o/s2orc
|
Ototoxicity: Old and New Foes
Drug-induced ototoxicity has been known for centuries. Already in the seventeenth century, hearing loss was described to be a side effect of quinine. The post-World War II pharmaceutical industry boomed with the production of aminoglycoside antibiotics followed by diuretics and cytostatic drugs. Widespread and long-term usage of these medications brought knowledge about their unwanted ototoxic effects. In the last decades, several new drugs appeared on the shelves of pharmacies and hearing loss or tinnitus have been among the side effects of many of them. However, the awareness of the community about new ototoxic medications is still not sufficient. New ototoxic drugs may belong to the class of phosphodiesterase-5 (PDE5) inhibitors, used to improve microcirculation and to treat erectile dysfunction. Moreover, interferons used for the therapy of hepatitis B and C, the common painkillers paracetamol and hydrocodone, the synthetic opioid methadone and the inhibitors of reverse transcriptase were demonstrated to induce adverse effects on hearing. Lastly, hearing loss linked to immunosuppressive drugs was documented in patients undergoing organ transplantation. Making patients aware of adverse drug reactions and offering them audiological monitoring and intervention should be considered by the respective therapists.
Introduction
The sense of hearing is fundamental to the communication and proper reaction to dangerous situations. Moreover, recent studies indicated that the hearing loss increases significantly the risk of dementia [1]. Unfortunately, people's ability to hear deteriorates with time, as the human auditory epithelium is post-mitotic and unable to regenerate. In other words, the few thousand of auditory hair cells with which we are born have to last our entire life. There are several causes of hearing loss such as noise, aging, infections, tumors, neuronal degeneration or cardiovascular diseases. Another important cause of hearing loss is ototoxicity.
In this chapter, we will concentrate on medications that are known to induce hearing loss as an adverse effect. These medications are also known as ototoxic medications.
Clinical signs of ototoxicity may include at least one of the following symptoms:
• tinnitus
• hearing loss (unilateral or bilateral)
• vertigo.
First signs of ototoxicity usually develop during or shortly after receiving a particular medication. The majority of ototoxic drugs induce irreparable damage translating into permanent hearing loss; however, aspirin and its derivatives are among the drugs that cause hearing loss which is, in most cases, reversible [2]. In fact, aspirin-induced ototoxicity in the form of tinnitus was used for decades by rheumatologists to adjust the maximal therapeutic dose of salicylates in the patients. This practice was abandoned because of the poor correlation between salicylate blood levels and ototoxicity symptoms [3] and because of the development of new drugs used for the treatment of rheumatic diseases. Nevertheless, even today there are patients occasionally admitted to the emergency room because of salicylate-induced ototoxicity [4]. The ototoxicity of salicylate has been attributed to its capacity to bind and inhibit the action of the cochlear protein prestin, expressed by the outer hair cells [5,6]. In addition, salicylate can induce death of spiral ganglion neurons as well as cause dysregulation in the central auditory pathway [7].
Other groups of well-known ototoxic drugs that frequently cause hearing loss include:
• platinum-based cytostatic drugs
• aminoglycoside antibiotics
• loop diuretics
Platinum-based cytostatics (cisplatin, carboplatin and oxaliplatin) are used as single agents and in combination with other drugs for the treatment of various types of cancer (e.g., testicular carcinoma, lung carcinoma, ovarian carcinoma, head and neck carcinomas, melanomas, lymphomas and neuroblastomas) [8]. The platinum-based drugs bind DNA and induce irreversible changes that prohibit tumor cell division. However, common adverse effects of platinum-based drugs include nephrotoxicity and ototoxicity. This toxicity is attributed to an excessive production of reactive oxygen species that leads to death of auditory hair cells [9][10][11]. Clinically, patients develop permanent bilateral hearing loss that originates in the high frequencies [12]. In addition, patients may have difficulties with speech understanding in noise [13].
Aminoglycosides are a group of antibiotics used to treat gram-negative bacterial and mycobacterial infections. Clinically used aminoglycosides include amikacin and kanamycin (primarily cochleotoxic) as well as gentamicin, streptomycin and tobramycin (primarily vestibulotoxic) [14]. Similar to the ototoxic mechanism of platinum-based drugs, aminoglycosides induce excessive formation of free oxygen species followed by apoptosis of sensory hair cells [10,15]. The aminoglycoside-induced hearing loss is bilateral and permanent and starts in the high frequencies. Precisely because of its ototoxic properties, gentamicin is frequently used for the treatment of Ménière's disease in the form of intratympanic injections to deplete the vestibular hair cells and thus, to prevent frequent vertigo attacks.
Of note: About 30% of the world population is infected with Mycobacterium tuberculosis [16]. The treatment of tuberculosis (especially that caused by multidrug-resistant Mycobacterium tuberculosis) includes intravenous administration of so-called second-line antibiotics (amikacin, kanamycin and streptomycin), leaving at least 20% of the patients with serious permanent hearing impairment [17].
Loop diuretics are a group of drugs that inhibit renal reabsorption of sodium, chloride and potassium. They are often used to treat kidney insufficiency or heart failure. Loop diuretics include furosemide, bumetanide, ethacrynic acid and torsemide. Their ototoxic mechanism involves inhibition of potassium resorption occurring in the stria vascularis and consequent decrease in the endocochlear potential [18]. The hearing loss induced by loop diuretics is bilateral and usually reversible; however, since loop diuretics are known to synergize with platinum-based drugs or with aminoglycosides in their ototoxic action, in patients receiving drugs from both groups, loop diuretics may worsen the degree of permanent hearing loss [19][20][21].
There is a growing number of case reports and larger studies indicating that the family of ototoxic drugs is growing and embraces newly developed medications. Although the ototoxic properties of several pharmacological drugs were recently compiled in an excellent review written by Cianfrone et al. [22], the clinical information changes and requires update.
In this chapter, we review a selected group of frequently used contemporary pharmacological drugs (phosphodiesterase-5 blockers and antiviral drugs (see Table 1), painkillers and immunosuppressants) with respect to audiologically important adverse reactions, including hearing loss and tinnitus. Although in industrialized countries hepatitis C and B therapy with pegylated or non-pegylated interferons and ribavirin is being replaced by other pharmacological regimens, one should not ignore the fact that not all countries and hospitals have adopted the new routine and that interferons are still in use, possibly contributing to drug-related hearing loss.
Phosphodiesterase-5 (PDE5) inhibitors
PDE5 inhibitors block the phosphodiesterase-5 in the smooth muscle cells lining the blood vessels in the cardiovascular system. Phosphodiesterase-5 degrades cyclic GMP, regulating smooth muscle tone. The first PDE5 inhibitor-sildenafil-was introduced in the market in 1998 under the name Viagra. PDE5 inhibitors are used for the treatment of erectile dysfunctions and for pulmonary artery hypertension.
In the year 2007, an alarming report was published by Mukherjee and Shivakumar, in which a case of bilateral profound sensorineural hearing loss was described in 44-year-old man who took 50 mg/day of sildenafil for 2 weeks [23]. Based on that report, FDA issued a warning about possible sudden hearing loss among users of PDE5 inhibitors.
Over the past 10 years, evidence suggesting negative influence of PDE5 inhibitors on hearing has accumulated. In a clinical study, Okuyucu et al. [24] reported significant but reversible unilateral hearing loss in four of 18 patients taking PDE5 inhibitors. The hearing loss affected the right ear at 10,000 Hz (p = 0.008).
Much larger epidemiological study published 1 year later by McGwin [25] evaluated the relationship between hearing loss and the use of PDE5 inhibitors in a population-based sample. This USA-based study was designed using self-reported hearing impairment and PDE5i use and included over eleven thousand men who were 40 years or older. Results of this study indicated that men with hearing loss are more than twice as likely to use PDE5 inhibitors, when compared with those not reporting hearing loss. However, no causal relationship could be established in that study.
In 2011, Khan et al. [26] published a report based on data provided by pharmacovigilance agencies in Europe, the Americas, East Asia and Australasia, and on published reports. The authors identified 47 cases of sensorineural hearing loss among PDE5 inhibitor users, most of them unilateral. Almost 70% of the subjects (mean age 56.6 years, men-to-women ratio 7:1) reported hearing loss within 24 h after ingestion of PDE5 inhibitors.
In 2012, unilateral sudden sensorineural hearing loss affecting two male PDE5 inhibitor users (age 37 and 43) was described by Barreto and Bahmad [27]. Unfortunately, neither the time after the hearing loss has occurred nor the dosage of PDE5 inhibitors was stated. In addition to the hearing loss, both patients were affected by vertigo and tinnitus. After combination therapy consisting of steroids administered orally and intratympanically, one of the patients recovered partially, whereas the other one was left with permanent profound sensorineural hearing loss.
The causal relationship between the PDE5 inhibitors and (sudden) sensorineural hearing loss remains to be confirmed using experimental models. Au and colleagues using the animal model (C57BL/6J mice) and sildenafil (Viagra) were unable to find the differences in hearing thresholds between the drug-and placebo-treated animals [28]. However, other functional studies in mice with the use of osmotic pumps for drug release demonstrated that the inner ear of animals exposed to sildenafil reacted with hydrops [29].
The epidemiological and case report data indicate that PDE5 inhibitors may have general negative impact on hearing. Moreover, PDE5 inhibitors may induce sudden sensorineural hearing loss that in some cases can be successfully treated with corticosteroids; in some other cases, the patients recover without any treatment; and lastly, it can also leave patients with permanent hearing impairment.
Interferons
Interferons (IFN) are a group of naturally occurring proteins that are released by several cell types in response to infection or tumors. There are three classes of interferons: type I, type II and type III. Type I interferons include IFN-alpha and IFN-beta. Synthetic and recombinant interferons, alpha and beta, have been used for therapy of viral infections with hepatitis C or B virus. In addition, IFN-beta can be used to treat multiple sclerosis.
One of the first reports associating interferon treatment with hearing loss was published in 1994 [30]. In that report, a group of 49 patients (32 men and 17 women, mean age 48.6, age range 23-67) receiving various brands of interferons for chronic hepatitis B or C were assessed with pure tone audiometry before the onset of treatment and then at consecutive 1-week intervals. In the case of IFN-alpha, the drug was administered i.m. each day for 2 weeks and then three times a week for 14-22 weeks. In the case of IFN-beta, the drug was administered i.v. daily for 6 weeks. The study demonstrated that 45% (22 patients) developed auditory dysfunction: 14 patients (29%) reported having tinnitus and 18 patients (35%) were diagnosed with sudden sensorineural hearing loss. More than half (56%) of the patients treated with IFN-beta (total of 27 subjects) developed auditory disability, with unilateral or bilateral hearing loss affecting various frequencies diagnosed in 11 patients (41%). In the group treated with IFN-alpha (total of 22 subjects), seven developed unilateral hearing loss affecting 8000 Hz. Progressive hearing loss led in two cases to withdrawal from therapy. There was no association between the hearing loss and clinical parameters such as proteinuria, leucopenia and liver function. Interestingly, all patients recovered within 2 weeks after finishing the interferon treatment.
Published 1 year later, prospective audiological study of 73 patients treated with IFN-alpha or IFN-beta for hepatitis confirmed the above observations, including the hearing loss exclusively affecting 8000 Hz in patients receiving IFN-alpha [31]. There was, however, one difference: in the larger sample studied, the hearing abilities of one patient have not recovered after discontinuation of therapy. Later, studies confirmed majority of these observations [32] and most importantly the general reversibility of ototoxic effects of IFN-alpha [33].
Interesting mechanistic insights of IFN-alpha-induced ototoxicity were delivered from studies using mouse model [34]. There was an elevated ABR threshold in mice treated with IFN-alpha as compared to untreated control group. Moreover, histological findings of cochleae dissected from experimental animals indicated abnormalities in the number (lower) and appearance (cytoplasmic vacuolization) of the spiral limbus fibroblasts in the IFN-alpha-treated mice.
These findings point to direct negative effect of IFN-alpha on cochlear biology, which may result in the hearing loss.
Ribavirin is a guanosine analog (nucleoside inhibitor) that stops viral RNA synthesis. It is used to treat various viral hemorrhagic fevers, and it is the only known drug against rabies.
Although new therapeutic approaches are being introduced on the healthcare market for the treatment of hepatitis C (e.g., protease inhibitor telaprevir or boceprevir), ribavirin in combination with PEG-IFN-alpha is still a part of the current standard of care (SOC) therapy in some countries and it is also included in the new therapeutic regime.
Therapeutic use of PEG-IFN and ribavirin in hepatitis C infections induces similar otological effects as the therapy with non-pegylated interferons only. However, there is one major difference: the hearing abilities do not recover in the majority of cases. Although some reports describe no hearing disabilities [35] or sudden unilateral sensorineural hearing loss resolving spontaneously within 2 weeks after the end of treatment [36], some other demonstrate that patients may develop irreversible unilateral hearing loss [37] or irreversible unilateral pantonal hearing loss (measured by pure tone audiometry) and tinnitus [38].
Inhibitors of viral reverse transcriptase
According to the United Nations AIDS organization, approximately 36.7 million people worldwide are infected with the HIV virus. The patients with HIV are treated with drugs that inhibit the virus proliferation. Since HIV virus uses very unique enzyme to copy its genome, this enzyme-reverse transcriptase-is a pharmacological target of anti-HIV therapy. The unique thing about the HIV therapy is that it should never be stopped, even if the viral load is undetectable.
The discovery and the beginning of clinical application of reverse transcriptase inhibitors date back to the eighties last century. The first reports about their negative effect on hearing appeared some 10 years later and ever since conflicting conclusions are being drawn from several studies. In some studies, authors found the hearing loss among 30% of HIV patients taking the reverse transcription inhibitors [39][40][41], whereas in other studies, no association between audiological impairment and antiviral medication was found [42,43]. Various inclusion criteria, diverse outcome measure methods, sample size and many other factors could contribute to these dissimilar results.
In the controlled environment of experimental laboratory, the results look much more uniform and point at universal ototoxicity of all types of reverse transcriptase inhibitors that are on the market, as measured by the viability of auditory epithelial cell line exposed to various concentrations of 14 types of pharmacological reverse transcription inhibitors as single agents and in combination, as used in the clinics [44].
Paracetamol (acetaminophen) and hydrocodone
Paracetamol, also known as acetaminophen (in the USA and Canada) or APAP, is the most commonly used painkiller in North America and Europe. It selectively inhibits cyclooxygenase-2 (COX-2) and may also exert other pain-relieving functions. Recent studies on self-reported, professionally diagnosed hearing loss and use of analgesics indicated that regular use of paracetamol significantly increases the risk of hearing loss in men [45] and women [46]. The large sample sizes of these studies (26,917 men and 62,261 women) make both particularly credible.
The main conclusion from this study was that the long-term use of paracetamol (acetaminophen) increases the risk of developing hearing loss in men and women.
The mechanism of paracetamol-induced hearing loss was experimentally addressed in vitro [47]. The authors demonstrated that in the mouse auditory epithelium cell line, paracetamol and its metabolite NAPQI (N-acetyl-p-benzoquinoneimine) induce ototoxicity by causing oxidative stress as well as endoplasmic reticulum (ER) stress. These basic research results possibly explain the ototoxicity seen in people who regularly consume paracetamol. The question about usage of paracetamol and its frequency should be included in the surveys/questionnaires of patients with otologic and audiologic considerations.
Hydrocodone is a semi-synthetic opioid used for pain therapy and in common anti-cough medications. Hydrocodone is often prescribed in combination with paracetamol. In a report describing 12 patients with a history of hydrocodone overuse and progressive irreversible sensorineural hearing loss, the authors implicated nonresponsiveness of this type of hearing loss to corticosteroid therapy [48]. The authors reported that seven of eight patients who underwent consecutive cochlear implantation benefited from this type of auditory rehabilitation. Similar recent case report described a patient with unilateral hearing loss attributed to abuse of hydrocodone and paracetamol [49]. Also this patient was treated with cochlear implant.
The information delivered from the in vitro model with auditory epithelial cell line suggested that the combination of hydrocodone and paracetamol results in ototoxicity not due to hydrocodone but rather due to paracetamol [50]. The authors suggested that the contribution of hydrocodone to clinically seen ototoxicity may lay in hydrocodone assisting the addiction to the drug combination.
Methadone
Methadone is an opioid drug for treating pain. In addition, it is used for therapy of people addicted to opioids.
In the year 2014, about 7 million US citizens were abusing prescription drugs (source: National Center for Health Statistics). One of these drugs is methadone. Six recent case reports exposed an unknown before side effect of methadone abuse-the hearing loss [51][52][53][54][55]. The patients described in reports were young (age range 20-37) and were admitted to the hospitals because of methadone overuse. In all reported cases, the patients were deaf upon awakening (one perceived tinnitus), and in four of six cases, hearing loss was only temporary condition. The remaining two patients were unfortunately left with severe sensorineural hearing loss for the remaining observation time (2 and 9 months).
Immunosuppressant calcineurin inhibitors (cyclosporine A and tacrolimus)
Since the beginning of transplantology in the sixties, several people with incurable diseases of liver, kidney and other organs received the donor tissues as therapeutic procedure. This type of therapy is combined with an inevitable immune reaction against the non-self tissue. To prevent these reactions, immunosuppressants are used. Among them, cyclosporine A and tacrolimus (FK506) are commonly used to prevent graft rejection reaction. Both drugs decrease in various ways the activation of lymphocytes T and thus inhibit the graft rejection process. The immunosuppressants must be taken continuously.
Rifai et al. [56] performed a large study involving 521 liver transplant patients. The study was based on self-reported hearing loss and showed that of 521 individuals, 141 (27%) developed hearing loss following transplantation, particularly in those patients who were receiving tacrolimus as principal immunosuppression. This study was followed by recent trial, where instead of self-reported hearing loss, audiometric measurements were performed [57]. Of 70 liver transplant patients included in that study, 32 reported hearing loss and tinnitus following the transplantation. The types of hearing loss included sudden hearing loss and progressive hearing loss, which developed more than 3 years after transplantation. Audiometry confirmed the patients' reports and identified 12 patients with mild, 28 with moderate and 25 with severe hearing loss following the transplantation. The association between tacrolimus and hearing loss was seen again in this study.
Another group of transplant patients is the renal transplant group. Kidney transplantation is a surgical procedure performed since the mid-fifties last century; however, postsurgical survival was very low, because of the graft rejection [58]. The introduction of cyclosporine A in the eighties significantly improved the post-transplant survival rate but brought another type of problems, namely adverse reactions such as hearing loss. Renal patients are known to often have hearing impairments [59], and it was shown that the renal transplantation restores the hearing abilities, when measured 1 year after surgery. However, some renal transplant patients who are on a long-term immunosuppressant therapy such as cyclosporine A or tacrolimus develop hearing disabilities including sudden sensorineural hearing loss [60][61][62] or a progressive hearing loss [63].
Particularly, worrying tendency is seen among the pediatric renal transplant patients. A prospective study of 27 children (mean age 14) with normal hearing prior to kidney transplantation determined after a mean follow-up of 30 months that 17 children developed sensorineural hearing loss [60]. Two of 17 children were diagnosed with sudden hearing loss and the rest of the group with a progressive bilateral hearing loss.
It is likely that the ototoxic effect of immunosuppressants depends on the length of time of intake. Groups studying noise-induced hearing loss have successfully used cyclosporine A and tacrolimus to protect the auditory epithelium in mice from noise-induced injury [64]. However, these were single doses, not the years-long regimens taken by transplant patients.
The treatment of hearing impairment occurring in organ transplant recipients includes hearing aids and cochlear implantation [65]. However, one should not ignore the fact that these patients are immunocompromised, and therefore, the risk of wound infection after CI should be taken under consideration during postsurgical management.
Similarly, substances known to damage mitochondria, such as aminoglycosides or cisplatin, are well-established ototoxic agents and contribute significantly to hearing loss and tinnitus [73].
The substances listed in the present chapter can all damage the mitochondria. The damaging mechanism varies, and for instance, IFN-alpha impairs the transcription of mitochondrial DNA, whereas nucleoside analogues impair the replication of mitochondrial DNA [74]. In agreement with this, severe mitochondrial toxicity manifested by hyperlactatemia and pancreatitis was described in some cases involving patients with HIV/hepatitis C virus treated with pegylated interferon and ribavirin [75]. Paracetamol was also shown to have negative effect on mitochondria by inducing overproduction of reactive oxygen species (ROS) and inducing endoplasmic reticulum stress [47,76]. Methadone was shown to impair synthesis of mitochondrial ATP leading to bioenergetics crisis of the affected organism [77]. The reverse transcriptase inhibitors used to slow down the replication of HIV virus were likewise demonstrated to induce mitochondrial toxicity [78,79]. Lastly, cyclosporine A was shown to inhibit adenine nucleotide net transport into the mitochondria [80], whereas tacrolimus was associated with decreasing the levels of oxidative phosphorylation in mitochondria [81].
Since the negative effect of various drugs on mitochondria likely results in a damage of hearing, it is plausible that the mitochondria-supporting substances (such as coenzyme Q10, vitamin B12 with folic acid, sirtuin and many others) given as auxiliary therapy could protect the sense of hearing in patients with hepatitis, HIV, transplant patients or painkiller or PDE5 inhibitor users. In fact, targeting mitochondria is becoming increasingly popular [82], and there were some successful attempts in treatment of hearing conditions using mitochondrial supplements [83][84][85][86][87][88][89].
Conclusions
The appearance of new drugs to treat ever more conditions is an inevitable and welcomed progress of medical and pharmaceutical sciences. However, assuring the drug safety in terms of hearing disability is difficult, and it often requires very long and regular intake periods, which are outside of regular phase I, II or III clinical trials. As for the duration of phase IV (the postmarketing surveillance trials), which is usually set for 2 years, perhaps it could be extended specifically for the monitoring of audiological conditions.
The ototoxicity of prescription or over-the-counter drugs is a global problem. Collaboration between audiologists or otologists and other healthcare providers is necessary to protect the patient's auditory health. Auditory consultations ought to be a routine during the treatment of patients with viral hepatitis C or B receiving interferons and ribavirin or HIV-positive individuals taking anti-reverse transcriptase drug cocktail. Moreover, patients undergoing solid organ transplantation should be audiologically monitored. The option of audiological care for children treated for any of the above infectious diseases or undergoing transplantation should be presented to their parents. Lastly, frequent users of painkillers and recreational drugs should be informed about the risks of the medications they are reaching for every day.
During the unavoidable drug therapies, preventive means such as mitochondrial protection and supplementation during the drug treatment and audiological monitoring as well as fitting the patients with hearing aids or cochlear implants, could help to keep the hearing healthy or at least to restore it to some degree.
The good condition of hearing is as important as that of heart, lung or other organs. Informing the community about ototoxicity and keeping up to date with the case reports and other scientific communications may help to save the sense of hearing.
Glossary of terms
Cyclosporine A: Fungal metabolite that suppresses immune reaction. It inhibits the activation of lymphocytes T by binding to cyclophilin and inhibiting calcineurin.
Endocochlear potential: Voltage of +80 mV in the scala media, generated by the stria vascularis, essential for the auditory transduction.
|
v3-fos-license
|
2024-07-25T15:08:55.077Z
|
2024-07-01T00:00:00.000
|
271420043
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": null,
"oa_url": null,
"pdf_hash": "d068a5d5b8c11cf5c36be2fdfe0c7d5d01d804c9",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46535",
"s2fieldsofstudy": [
"Biology",
"Environmental Science"
],
"sha1": "53ef98831d32d4d2f756ae9552bd3f49fcfdbf9a",
"year": 2024
}
|
pes2o/s2orc
|
Ribosome Pausing Negatively Regulates Protein Translation in Maize Seedlings during Dark-to-Light Transitions
Regulation of translation is a crucial step in gene expression. Developmental signals and environmental stimuli dynamically regulate translation via upstream small open reading frames (uORFs) and ribosome pausing. Recent studies have revealed many plant genes that are specifically regulated by uORF translation following changes in growth conditions, but ribosome-pausing events are less well understood. In this study, we performed ribosome profiling (Ribo-seq) of etiolated maize (Zea mays) seedlings exposed to light for different durations, revealing hundreds of genes specifically regulated at the translation level during the early period of light exposure. We identified over 400 ribosome-pausing events in the dark that were rapidly released after illumination. These results suggested that ribosome pausing negatively regulates translation from specific genes, a conclusion that was supported by a non-targeted proteomics analysis. Importantly, we identified a conserved nucleotide motif downstream of the pausing sites. Our results elucidate the role of ribosome pausing in the control of gene expression in plants; the identification of the cis-element at the pausing sites provides insight into the mechanisms behind translation regulation and potential targets for artificial control of plant translation.
Introduction
Modulation of translation is a well-conserved mechanism that regulates gene expression in numerous species, from bacteria to animals and plants [1]. In mammalian cells, translation is modulated to fine-tune embryonic development, cell differentiation, and the response to nutrient scarcity [2,3]. In plants, translation can be quickly regulated by biotic and abiotic stresses, showing potential in crop breeding for improved disease resistance [4]. Over the past decades, the general mechanisms of translation initiation, elongation, termination, and recycling have become well understood. Recent studies have revealed that the translation efficiency of individual transcripts is differentially regulated under various conditions. How the translation machinery recognizes various transcripts and adjusts the moving pace of translating ribosomes remains elusive.
The development of ribosome profiling (Ribo-seq), based on the high-throughput sequencing of mRNA fragments protected by bound ribosomes following the digestion of polysomes with RNaseI, has created avenues for the monitoring of translation efficiency for individual transcripts at single-codon resolution [5][6][7][8]. Analysis of ribosome occupancy on actively translated transcripts has identified many genomic loci that were previously considered noncoding regions, including 5′ untranslated regions (UTRs) and long noncoding RNA (lncRNA) [9]. In yeast (Saccharomyces cerevisiae) and mammalian cells, Ribo-seq data revealed many upstream open reading frames (uORFs) located in the 5′ end of the main open reading frame (mORF) encoded by a given transcript; these uORFs repress mORF translation [10,11]. Accumulating evidence from Ribo-seq and biochemical experiments has demonstrated that uORFs inhibit the translation of their downstream mORFs by several mechanisms, including depletion of limited transfer RNAs (tRNAs), disassociation of ribosomes from RNAs, and triggering of nonsense-mediated RNA decay (NMD) [12][13][14]. Comparative analyses between mice (Mus musculus), humans (Homo sapiens), and zebrafish (Danio rerio) revealed that uORFs are depleted from the regions near coding sequences (CDSs) and that the repression of their cognate mORFs is conserved across these species [15]. Although it was known that ribosomes do not always move along mRNAs at a constant rate, the precise position(s) of ribosome pausing became accessible only with the implementation of Ribo-seq.
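As a rough illustration of how Ribo-seq read positions translate into the quantities discussed here, the sketch below computes a simple per-codon pause score (footprint density at a codon relative to the mean density of its coding sequence). This is a generic, simplified formulation for illustration only; it is not the authors' analysis pipeline, and the count data are invented.

```python
import numpy as np

def pause_scores(codon_counts: np.ndarray, min_mean: float = 0.5) -> np.ndarray:
    """Per-codon pause score: footprint count at each codon divided by the
    mean footprint count over the CDS. Scores well above 1 suggest pausing."""
    mean_density = codon_counts.mean()
    if mean_density < min_mean:                # skip poorly translated genes
        return np.full_like(codon_counts, np.nan, dtype=float)
    return codon_counts / mean_density

# Invented footprint counts for a short CDS; codon 4 shows a strong pause.
counts = np.array([3, 2, 4, 3, 30, 2, 3, 4, 2, 3], dtype=float)
scores = pause_scores(counts)
print(np.where(scores > 5)[0])  # -> [4]
```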
Accumulating lines of evidence indicate that ribosome pausing at a given site on a transcript prolongs the window during which nascent peptides can interact with protein partners, facilitating co-translational folding, complex assembly, organelle targeting, and chaperone recruitment. In addition, ribosome pausing can be induced by environmental conditions such as nutrient scarcity and heat stress. For example, arginine limitation in mammalian cells led to ribosome pausing and lower protein biosynthesis rates [16]. Aging also induced ribosome pausing in the nematode Caenorhabditis elegans and in budding yeast, resulting in an overload of the ribosome-associated quality control system and the aggregation of nascent peptides [17].
More factors influencing ribosome pausing are being identified, such as the secondary structure of mRNAs, contiguous proline residues in nascent peptides, and clusters of positive charges near the peptide exit tunnel; notably, codon usage appears to have limited influence [18]. Eukaryotic initiation factor 5A (eIF5A), a ribosomal pause relief factor, was shown to regulate the selection of the translation initiation codon and maintain the fidelity of translation initiation in humans and yeast [19]. Another study unveiled a connection between RNA methylation and translation: ribosome pausing was detected at the initiation codon, acting like a "brake" to restrict protein output. This mechanism was regulated by N6-methyladenosine (m6A) modification near the initiation codon [20].
The mechanisms of translational regulation in plants are now better understood, thanks in large part to the Ribo-seq technique.Numerous genes were shown to be under translational modulation to promote photomorphogenesis and chloroplast development when plants become exposed to light [21][22][23].Translational regulation is also involved in plant acclimation to environmental stresses.Indeed, drought, hypoxia, salinity stress, and nitrogen depletion also induce translational regulation in plants, with thousands of uORFs in transcripts bound by ribosomes in different plant species [24][25][26][27].
In Arabidopsis (Arabidopsis thaliana), uORFs upstream of TL1-BINDING FACTOR 1 (TBF1), a key gene encoding an immunity-related transcription factor, repress TBF1 translation and play a vital role in resistance against pathogenic bacteria. Surprisingly, transformation of rice (Oryza sativa) plants with a uORF(TBF1)-AtNPR1 construct, comprising the TBF1 uORFs placed upstream of the coding sequence of the immunity gene NONEXPRESSER OF PR GENES 1 (AtNPR1), significantly enhanced blight resistance of rice without a growth penalty, suggesting conservation of uORF-mediated translational regulation [28,29]. Further studies demonstrated that hairpin structures downstream of the uORF were required to suppress the translation of the mORF upon bacterial invasion [30].
Although recent progress indicates that translational regulation is a general and important mechanism by which plants respond to various developmental and environmental signals, the extent to which plants undergo ribosome pausing remains poorly understood. This study aims to identify the genes regulated by ribosome pausing and the sequence features that trigger ribosome pausing in plants.
Light is not only the most important energy source for plant growth and development by powering photosynthesis, but it also provides essential signals to shape plants as they grow.Photomorphogenesis is the developmental program induced in dark-grown (etiolated) seedlings when they first perceive light.Previous studies in Arabidopsis and maize (Zea mays) seedlings have suggested the participation of translational regulation in plant responses to light, although no evidence for ribosome pausing has been reported [21][22][23].
Here, we exposed etiolated maize seedlings to light and performed transcriptome deep sequencing (RNA-seq) and Ribo-seq analyses at several time points. We identified numerous ribosome-pausing events in maize seedlings establishing photomorphogenesis that negatively regulated the translation efficiency of their cognate transcripts. Furthermore, a conserved cis-element associated with ribosome pausing was revealed. Our findings provide global evidence that plant genes are broadly regulated by ribosome pausing, and the identified motif triggering ribosome pausing could be used for artificial modulation of the translation of plant transcripts and for the molecular breeding of crops.
Translational Regulation Responds Quickly to Early Light Exposure
We exposed 6-day-old etiolated maize seedlings from the inbred line B73 to various durations of white light (330 µmol/m²/s) and collected samples immediately before transfer to light (0 h time point) and after 0.5, 1, 2, and 4 h of light treatment for the generation of matched Ribo-seq and RNA-seq libraries. For each Ribo-seq library, we obtained over 12 million raw reads (Table S1). From these, we removed ribosomal RNAs (rRNAs) with specific probes, retaining 6% to 30% of the initial raw reads (Table S1). We mapped the remaining reads to the maize reference genome, which revealed lengths for most ribosome-protected fragments (RPFs) of 27 nucleotides (nt) to 31 nt (Figure S1A), a range consistent with previous reports [8,23,24]. Over 49% of the identified RPFs were generated from annotated protein-coding genes, and more than 50% of the RPFs were located in short open reading frames (sORFs), including independent uORFs, downstream open reading frames (dORFs), and some uORFs overlapping with mORFs (Figure 1A).
Taking 29 nt reads at the 2 h time point as an example, the RPFs showed a clear 3 nt periodicity along the CDS, with the first nucleotide of most RPFs mapping to the +1 position of each codon (Figures 1B and S1B). Other libraries showed a similar periodicity (Figure S2). Along mRNAs, the RPFs accumulated about 13 nt upstream of the start codon and about 16 nt upstream of the stop codon, corresponding to the positions where ribosomes assembled and dissociated from the transcript, respectively. The nucleotide periodicity and the distribution pattern of the RPFs reflect the typical features of Ribo-seq data.
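The 3 nt periodicity described above can be checked directly from the inferred P-site positions of mapped RPFs. The sketch below is illustrative only: it assumes P-site offsets relative to annotated start codons are already available as plain integers, which is not the exact output format of the pipeline used in this study.

```python
from collections import Counter

def frame_distribution(psite_offsets):
    """Count P-site positions by reading frame relative to the start codon.

    psite_offsets: iterable of integer distances (in nt) from the first
    nucleotide of the annotated start codon to the inferred P-site.
    Returns a dict mapping frame (0, 1, 2) to the fraction of positions.
    """
    counts = Counter(offset % 3 for offset in psite_offsets if offset >= 0)
    total = sum(counts.values()) or 1  # guard against empty input
    return {frame: counts.get(frame, 0) / total for frame in (0, 1, 2)}

# Toy example: strongly periodic data concentrate in one frame, mirroring
# the ~3 nt periodicity reported for the 29 nt RPFs.
example = [0, 3, 6, 9, 12, 1, 15, 18, 21, 2, 24]
print(frame_distribution(example))  # frame 0 dominates (~0.82)
```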
To assess the dynamics of translational regulation in response to light, we looked for differentially translated genes (DTGs) in five pairwise comparisons (0-0.5 h, 0.5-1 h, 1-2 h, 2-4 h, and 0.5-4 h). We discovered that translational regulation mainly occurs during the 0-0.5 h period, as the number of DTGs dramatically decreased in the later comparisons (Figure 2A-E). The shift from darkness to light is an abrupt environmental change for seedlings; the large number of DTGs at the earliest time point reflected the prompt response of translational regulation to cope with sharp environmental fluctuations. This observation is in line with the idea that translational regulation is fast and can respond to environmental signals quickly [31]. Although the pairwise comparisons between the 0.5 and 1 h, 1 and 2 h, and 2 and 4 h time points yielded no DTGs, the 0.5-4 h comparison resulted in many DTGs, which may be attributed to a stable environment leading to a slow accumulation of gene expression changes (Figure 2E).
To assess the consistency between the changes in transcript levels and translation, we compared the differentially expressed genes (DEGs) and DTGs at various time points of light exposure (Figure 2F). From 0 to 0.5 h, we identified 1800 upregulated DEGs and 159 upregulated DTGs, with 68 shared genes. We also identified 1071 downregulated DEGs and 116 downregulated DTGs, with 35 shared genes. These shared genes accounted for about 3% of all DEGs but over 30% of DTGs. For the 0.5-4 h pairwise comparison, we obtained 4977 upregulated DEGs and 137 upregulated DTGs, of which 115 were shared; likewise, we identified 3383 downregulated DEGs and 89 downregulated DTGs, of which 73 were shared. For this longer period, the shared genes accounted for about 2% of all DEGs and about 80% of all DTGs. Thus, the vast majority of changes in transcript levels are not accompanied by changes in translation, especially over short time periods. In addition, most changes in transcript and translation levels were in the same direction. Our results indicate that the consistency between transcript levels and translation changes dynamically. In the earliest stage of light exposure (0-0.5 h), the genes regulated in the same direction at the translational and transcript levels represent fewer than 43% of DTGs, while translational regulation is highly consistent with transcript abundance in the later stage (0.5-4 h) (>82%).
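As a simple arithmetic check, the overlap percentages quoted above can be recomputed directly from the reported counts. The sketch below uses only the numbers given in the text and nothing from the underlying data.

```python
def shared_fractions(n_deg, n_dtg, n_shared):
    """Return shared genes as a fraction of all DEGs and of all DTGs."""
    return n_shared / n_deg, n_shared / n_dtg

# 0-0.5 h: 1800 + 1071 DEGs, 159 + 116 DTGs, 68 + 35 shared genes
deg_frac, dtg_frac = shared_fractions(1800 + 1071, 159 + 116, 68 + 35)
print(f"0-0.5 h: {deg_frac:.1%} of DEGs, {dtg_frac:.1%} of DTGs")   # ~3.6%, ~37.5%

# 0.5-4 h: 4977 + 3383 DEGs, 137 + 89 DTGs, 115 + 73 shared genes
deg_frac, dtg_frac = shared_fractions(4977 + 3383, 137 + 89, 115 + 73)
print(f"0.5-4 h: {deg_frac:.1%} of DEGs, {dtg_frac:.1%} of DTGs")   # ~2.2%, ~83.2%
```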
Genes Are Differentially Regulated at the Transcript and Translation Levels during Photomorphogenesis
The poor apparent correlation between transcript levels and translation prompted us to examine individual genes with different regulatory patterns at the two levels (Figure 3A-D). In the first 0.5 h of light treatment, most of the genes encoding photomorphogenesis regulatory factors remained unchanged either transcriptionally or translationally, except for ELONGATED HYPOCOTYL5 (HY5) and its homolog Zm00001d039658, which were upregulated at both levels (Figure 3A). Among the negative regulators of photomorphogenesis, CONSTITUTIVE PHOTOMORPHOGENIC1 (COP1), FUSCA5 (FUS5), B-BOX19 (BBX19), and LIGHT-RESPONSE BTB1 (LRB1) were upregulated at the translational level, while their transcript levels were constant. BBX20, another negative regulator of photomorphogenesis, was downregulated at the translational level but unchanged at the transcript level (Figure 3A). Most chloroplast transcripts were unchanged at the translational level; at the transcript level, some chloroplast genes encoding ribosomal proteins, including rpl14, rpl20, rpl32, rpl33, and rpl36, were decreased, whereas psbC, psbL, psaC, psbF, psbT, psbN, and rpl22 were increased (Figure 3A).
From 0.5 to 1 h, most photomorphogenesis-related transcripts and chloroplast transcripts remained unchanged at both the transcript and translation levels (Figure 3B). During the period from 1 to 2 h, PHYTOCHROME-INTERACTING FACTOR5 (PIF5) was specifically downregulated at the transcript level, while FAR-RED INSENSITIVE219 (FIN219) was specifically upregulated at the translational level. Some chloroplast transcripts encoding chloroplast ribosome proteins and photosystem components were significantly upregulated at the transcript level but not at the translational level (Figure 3C). With prolonged light exposure (more than 2 h), some chloroplast transcripts were upregulated at the translational level with mild changes in their transcript levels (Figure 3D). Notably, HY5 transcript levels were lower, but its translation level remained unchanged.
We calculated the translation efficiency (TE) of each transcript to evaluate its utilization by ribosomes [32]. In general, the TE of most transcripts, including those of key regulators of photomorphogenesis, did not change much, remaining below the significance threshold set for this study (|Z-score| > 1.5). However, several other regulators of photomorphogenesis were affected by exposure to light. For example, the TE of FUS5 was slightly enhanced at 0.5 h compared to the dark control (Figure 3E), and the TE of COP1 moderately decreased between 0.5 and 1 h of light exposure (Figure 3F). The TEs of EIN3-BINDING F-BOX PROTEIN (EBF) and HY5 increased significantly between 1 and 2 h and after 2 h, respectively (Figure 3G,H). Following 0.5 h of light exposure, the TEs of the chloroplast genes psaC, rpl22, rps14, and psbF were higher than their TEs in darkness. From 1 to 2 h of light exposure, some photosystem genes showed lower TEs (Figure 3G), while the transcripts encoding chloroplast ribosome proteins and electron transfer chain components exhibited the opposite pattern, with higher TEs (Figure 3H).
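Differential TE in this study was computed with the xtail package (see Methods); the ratio below is only a simplified illustration of the underlying idea, namely TE as ribosome footprint abundance normalized to mRNA abundance. The FPKM inputs and pseudocount are hypothetical.

```python
import math

def translation_efficiency(rpf_fpkm, rna_fpkm, pseudocount=0.1):
    """Simplified TE: ribosome-footprint abundance over mRNA abundance.

    A small pseudocount avoids division by zero for lowly expressed genes.
    Illustration only; the study used xtail, which models differential TE
    statistically rather than as a raw ratio.
    """
    return (rpf_fpkm + pseudocount) / (rna_fpkm + pseudocount)

# Hypothetical example: a transcript whose mRNA level rises faster than its
# footprint level shows a drop in TE (log2 fold change < 0).
te_dark = translation_efficiency(rpf_fpkm=20.0, rna_fpkm=10.0)
te_light = translation_efficiency(rpf_fpkm=22.0, rna_fpkm=40.0)
print(round(math.log2(te_light / te_dark), 2))  # negative value: TE decreased
```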
To investigate the correlation between the changes in transcript levels and TE, we plotted the fold-changes in RNA abundance against the corresponding changes in TE for each gene. In general, the changes in TE and transcript levels were negatively correlated, especially in the later periods, as might have been expected, with Pearson's correlation coefficients (R) of R(0-0.5 h) = −0.38, R(0.5-1 h) = −0.65, R(1-2 h) = −0.70, and R(2-4 h) = −0.76 (Figure 3I-L). HY5 always maintained efficient translation independently of changes in its transcript levels, suggesting specific regulation of its TE by light. By contrast, COP1 showed an increase in both TE and transcript levels from 0 to 0.5 h, followed by a lower TE but a higher RNA abundance from 0.5 to 1 h.
Light Exposure Widely Alleviates Ribosome Pausing in Maize
The inconsistent patterns observed between translation and transcript levels suggested the existence of translational regulation. Ribosome pausing, an important translational regulatory mechanism, has been widely studied in prokaryotes. To investigate whether ribosome pausing participates in photomorphogenesis, we calculated the pausing score for 5,115,551 ribosome-protected sites and identified 466 paused transcripts (pausing score > 50, Z-score > 1.65, and base count > 20 as cutoffs) across the five time points (Figure 4A). Overall, most ribosome-pausing events were specific to etiolated seedlings and were quickly released after a short exposure to light (0.5 h). We divided the transcripts undergoing ribosome pausing into five clusters according to their extent of ribosome pausing at each time point (Figures 4B and S3A, Table S2). Cluster 1 exhibited a significant increase in ribosome pausing at 0.5 h, returning to low levels at later time points. In Clusters 2 and 3, the ribosome-pausing levels quickly decreased following exposure to light; in Clusters 4 and 5, the degree of ribosome pausing first dropped sharply before rising again at one of the later time points.
To understand the functions of the transcripts with ribosome pausing, we performed a Kyoto Encyclopedia of Genes and Genomes (KEGG) pathway and Gene Ontology (GO) term enrichment analysis using the genes from each cluster (Figures 4C, S4 and S5, Tables S3 and S4). Cluster 1 genes were enriched for 'macromolecule modification' and 'regulation of developmental processes'. Zm00001d039637 encodes Golden2-like14 (GLK14), and its homolog EARLY FLOWERING MYB PROTEIN (EFM) in Arabidopsis interacts with a negative regulator of H3K36 methylation [33]. Light is an indispensable condition for plant growth, but sudden exposure to light can also cause physiological stress to etiolated seedlings. Zm00001d023240 encodes the putative Ser/Thr protein kinase BLUE LIGHT SIGNALING1 (BLUS1), which mediates an early step in phototropin signaling in guard cells under light conditions. Various stresses inhibit BLUS1 expression through the production of reactive oxygen species (ROS), thereby closing stomata [34]. ALTERNATIVE OXIDASE1a (AOX1a) and the cytochrome P450 gene CYP81D8 encode proteins responsible for ROS clearance, and their expression is induced by elevated ROS levels. The transcript levels of these two genes were upregulated at the 0.5 h time point relative to the 0 h time point (darkness), indicative of ROS accumulation in plant cells (Figures 4C and S3B). The presence of BLUS1 in Cluster 1 suggests that ribosome pausing is involved in acclimation to light stress.
Transcripts in Clusters 2 and 3 were involved in the 'regulation of seedling development', the 'misfolded protein response', the 'hydrogen peroxide metabolic and catabolic process', 'protein dephosphorylation', and 'tRNA maturation'. Zm00001d022045 encodes a tRNA (guanosine(18)-2′-O)-methyltransferase and is predicted to localize to chloroplasts. The release of ribosome pausing on this transcript in the light would be beneficial for tRNA maturation and chloroplast development. Zm00001d002131 encodes STRESS-ENHANCED PROTEIN2 (SEP2), a chlorophyll-binding protein. In Arabidopsis, the homolog AtSEP2 is transcribed under low-light conditions and rapidly induced by high-light conditions. Our RNA-seq data support this conclusion (1.87-fold increase, Figure S5B). The ribosome pausing on this gene was gradually released after 0.5 h, indicating that its light-responsive expression is regulated at both the transcript and translation levels. Genes in Clusters 4 and 5 were mainly involved in the 'microtubule-based process' and 'RNA 3′ uridylation'. Zm00001d015059 encodes UTP:RNA uridyltransferase 1 (URT1), the main terminal uridyltransferase (TUTase) responsible for mRNA uridylation. URT1 can repair deadenylated mRNA ends and maintain mRNA stability [35]. These results show that exposure of etiolated seedlings to light can release ribosome pausing, which may be an important mechanism for the regulation of gene expression to cope with abrupt environmental changes.
We specifically examined the important regulatory factors of photomorphogenesis and the chloroplast transcripts for ribosome-pausing events. However, their transcripts were not included in the list of transcripts experiencing significant and high ribosome pausing, suggesting that ribosome pausing is not the main mode of translational regulation for these transcripts, despite some individual chloroplast transcripts displaying some pausing (Figure S6). Some mitochondrial transcripts, by contrast, displayed changes in ribosome pausing (Figure S7, Table S2).
Ribosome Pausing Negatively Regulates Translation in Maize
Based on reports from bacteria, yeast, Arabidopsis, and cancer cells, paused ribosomes negatively regulate protein biosynthesis [36][37][38][39].To examine the effects of ribosome pausing on plant translation, we characterized the TE changes for the genes from each of the five clusters in the 0-0.5 h pairwise comparison (Figure S3C).Notably, most of these paused transcripts did not show significant changes in their TE for this time interval.In Cluster 1, the TE of 15 and 2 transcripts was significantly upregulated and downregulated, respectively.In Clusters 2-5, very few transcripts exhibited an increased TE, with relatively more transcripts experiencing a decrease in their TE during the 0-0.5 h interval.
We did not find any relationship between TE and pausing score (Figure 5A), which prompted us to reassess our method of calculating TE. Accordingly, we randomly chose several paused transcripts; their degree of pausing was always negatively related to the extent of RPF coverage (measured as the ratio of nucleotides covered by RPFs to those in the CDS) (Figure 5B and Table S5). Usually, TE is calculated from the read counts of Ribo-seq and RNA-seq data for single transcripts and ignores ribosome coverage. When transcripts undergo ribosome pausing, the pausing sites will be highly represented among the reads in the Ribo-seq data, whereas the rest of the sites in the transcript will not be captured. Therefore, a transcript with unevenly distributed RPFs caused by pausing events may show an unchanged or even higher TE. To solve this issue, we calculated the translation intensity (TI, where TI_transcript = Median_RPFs × Coverage_RPFs), which takes read coverage into account, to better reflect translation progression. We found that the pausing score and TI usually changed in opposite directions. Taking the 2 h and 0 h time points as an example, we observed that genes with higher pausing scores at 0 h have a lower TI than at 2 h (Figure 5C). We conclude that ribosome pausing negatively regulates translation in plant cells.
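A minimal sketch of the TI definition given above (median RPF signal multiplied by CDS coverage) is shown below. It interprets Median_RPFs as the median per-nucleotide RPF depth across the CDS and Coverage_RPFs as the covered fraction of the CDS; this interpretation is consistent with the text but is not the authors' exact code, and the depth values are hypothetical.

```python
import statistics

def translation_intensity(cds_depth):
    """TI = median RPF depth across the CDS x fraction of CDS covered by RPFs.

    cds_depth: list of per-nucleotide RPF read depths along the CDS.
    """
    if not cds_depth:
        return 0.0
    coverage = sum(1 for d in cds_depth if d > 0) / len(cds_depth)
    return statistics.median(cds_depth) * coverage

# A paused transcript: one dominant peak, little coverage elsewhere -> low TI.
paused = [0, 0, 1, 0, 0, 250, 0, 0, 1, 0]
# An evenly translated transcript -> higher TI despite no single large peak.
even = [25, 26, 24, 25, 27, 25, 24, 26, 25, 25]
print(translation_intensity(paused), translation_intensity(even))  # 0.0 25.0
```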
To further test for negative regulation of translation by pausing, we performed a tandem mass tag (TMT) labeling quantitative proteomics analysis of samples from the 0 h and 2 h time points. The proteomics data indicated that genes whose transcripts are characterized by higher pausing scores produce less protein (Figures 5D and S8). We chose transcripts whose pausing scores decreased by over 100 at the 2 h time point relative to the 0 h time point and individually inspected the quantification data from the proteomics analysis. A total of 14 proteins increased in abundance at 2 h compared to 0 h among the 17 significantly changed proteins (Figure S8). In summary, ribosome pausing plays a negative regulatory role in protein biosynthesis in maize.
A High GC Content Leads to Ribosome Pausing
As an important mode of translational regulation, ribosome pausing occurs at specific positions along certain transcripts. To reveal the possible positional preference of ribosome pausing, we calculated the distribution of pausing sites along the scaled length of transcripts, which revealed that most pausing sites were localized in the CDS, with few pausing events present in the 5′ UTR or 3′ UTR. In the CDS, pausing did not display a clear distribution bias, indicating that ribosome pausing might inhibit translation at the elongation step (Figure 6A).

In chloroplasts, the secondary structure of mRNA, contiguous proline residues in the encoded proteins, and positive charges carried by newly formed peptides are all associated with ribosome pausing [18]. In studies of bacteria, several factors affected the movement speed of ribosomes, such as Shine-Dalgarno sequences, pro-pro motifs, and charged amino acids mediating specific interactions between the ribosomal exit tunnel and the nascent peptide [37,40-42]. To investigate the sequence features linked to ribosome pausing in maize, we analyzed the encoded amino acids around the pausing sites, the charges of the nascent peptide chains, and the sequence conservation of the transcripts. We did not identify clear pro-pro motif characteristics in the nascent peptide sequences around the pausing sites or a clear pattern in their charge properties (Figure S9). We thus looked for a sequence feature in the transcripts undergoing ribosome pausing. Importantly, we detected a clear C signal at 13 nt upstream of the pausing sites (Figure 6B). This signal should not be caused by RNase I digestion, as RNase I has no base preference [43]. We also checked the sequences of random RPFs on transcripts without ribosome pausing and failed to find this C-rich motif, suggesting that the upstream C-rich feature is specific to transcripts subjected to ribosome pausing (Figure 6B). In addition, we compared the GC content between pausing and non-pausing transcripts using 500 bp of sequence on either side of the pausing sites or of random sites from non-pausing RPFs. The GC ratio around the pausing sites was higher than that of other ribosome-protected sites not undergoing pausing (Figure 6C).
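The GC comparison described above can be illustrated with a short helper that computes GC content in a fixed window around a site. The transcript sequence and site position below are placeholders for illustration, not data from the study.

```python
def gc_content(seq):
    """Fraction of G or C bases in a sequence (case-insensitive)."""
    seq = seq.upper()
    if not seq:
        return 0.0
    return (seq.count("G") + seq.count("C")) / len(seq)

def window_gc(transcript_seq, site, flank=500):
    """GC content of the region +/- `flank` nt around a 0-based site,
    truncated at the transcript ends."""
    start = max(0, site - flank)
    end = min(len(transcript_seq), site + flank + 1)
    return gc_content(transcript_seq[start:end])

# Placeholder transcript and pausing site purely for illustration.
toy_transcript = "AUGGCGCCGCCGCCGCCGCCAUUAAUAUAUGGCUAA".replace("U", "T")
print(round(window_gc(toy_transcript, site=10, flank=10), 2))  # ~0.86
```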
To investigate whether pausing genes and other genes have different GC biases, we inspected the GC content of full-length transcripts centered on the RPF sites and scaled the length of the transcripts from the start codon to the pausing site and from the pausing site to the stop codon into 50 bins, respectively.When we aligned the start codon, stop codon, and the pausing site or ribosome-protected site from pausing and non-pausing transcripts, we discovered that the GC content of full-length pausing transcripts is higher than that of the non-pausing transcripts over the entire transcript length (Figure 6D).This result suggests that the GC content distribution of entire transcripts rather than that of a fragment of the transcript is related to ribosome pausing.Most transcripts show a gradual decrease in their GC content from the start codon to the stop codon [44].Although we observed this trend for pausing transcripts and the random set of transcripts, the GC content near the pausing site increased more than at random ribosome-protected sites.To our knowledge, in previous studies on ribosome pausing in bacteria, yeast, and animal cells, no relationship has been described between the GC content of transcripts and ribosome pausing, which may be a unique mechanism for ribosome pausing in plants.
We used MEME tools to search for potential conserved motifs in the regions upstream and downstream of the pausing sites separately, identifying the significant motif "CGCCGCCGCCGCCGCC" (CGC motif) in the 3′ region of the pausing sites (Figures 7A and S10). Transcripts containing this CGC motif had significantly higher pausing scores than transcripts lacking the motif (Figure 7B). The free energy of the thermodynamic ensemble of the CGC motif is −2.95 kcal/mol, which favors the formation of stem-loop structures based on simulations (Figure 7C). We analyzed the functions of all transcripts carrying a CGC motif and experiencing ribosome pausing in our Ribo-seq data. These genes are involved in many important pathways, including signal transduction, molecular interaction, and transcription (Figure 7D). Our results reveal sequence features that might be previously unknown determinants of ribosome pausing in maize.
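To illustrate how a transcript could be screened for the CGC-rich motif downstream of a pausing site, the sketch below uses a simple regular-expression scan. The tolerance for partial matches and the example sequences are assumptions; the study itself discovered and scanned the motif with MEME and RSAT position weight matrices rather than a fixed pattern.

```python
import re

# At least four consecutive CGC repeats as a relaxed proxy for the
# "CGCCGCCGCCGCCGCC" motif reported in the text (an assumption; the study
# scanned position weight matrices with RSAT, not a fixed string).
CGC_PATTERN = re.compile(r"(?:CGC){4,}")

def has_cgc_motif(downstream_seq):
    """Return True if the sequence downstream of a pausing site contains
    a run of CGC repeats."""
    return bool(CGC_PATTERN.search(downstream_seq.upper()))

print(has_cgc_motif("aaucgccgccgccgccgccuaa".replace("u", "t")))  # True
print(has_cgc_motif("ATGGCAGCAGCAGCATAA"))                        # False
```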
Discussion
Translational regulation plays pivotal roles when plants are faced with severe environmental changes, but the underlying mechanisms are not clear.In this study, we discovered that maize genes are specifically modulated at the translational level when etiolated seedlings are exposed to light, with a rapid alleviation of translation repression imposed by ribosome pausing.Furthermore, we determined that an upstream cytosine and a downstream CGC-rich motif were conserved features at the pausing sites.These results shed new light on the mechanisms behind translational regulation and provide a potential cis-element as a target for translational control of plant genes.
Translational regulation mainly responds to light during the initial period of light exposure.When seedlings or plants are faced with a marked environmental change, they must react quickly by controlling signal transduction and metabolism in their cells.Light is not only indispensable for plant growth as an energy source, but it is also a major environmental stress for etiolated seedlings.Our results show that translational regulation mainly occurs within 30 min after etiolated maize is exposed to light.For longer light durations, far fewer genes were regulated at the translational level, indicating prompt translational modulation in response to light in maize.Protein translation may constitute an ideal regulatory node for environmental acclimation in plants, as it can quickly promote or attenuate protein production by mobilizing existing RNAs or stopping their translation in cells without de novo transcription or mRNA degradation.Notably, several important regulators of photomorphogenesis were upregulated at the translational level.COP1 is a master regulator of photomorphogenesis by inducing the degradation of key transcription factors such as HY5 and BBX in the light [45,46].We found that COP1, HY5, and two BBX genes were all under translational modulation when etiolated maize seedlings were exposed to light, indicating that protein homeostasis is important for photomorphogenesis and is fine-tuned not only by protein degradation but also by translation.
Ribosome pausing is widespread in plants as a mechanism to repress the translation of individual transcripts.We identified 466 ribosome-pausing events with altered pausing scores among the different time points of light exposure.Consistent with their regulation at the translation level, we detected most of the ribosome-pausing events in etiolated seedlings before light exposure, suggesting a strong relationship between ribosome pausing and darkness.Following the initial transfer into the light, most of the transcripts experiencing ribosome pausing in the dark were released from this repression, leading to global translational changes (0-0.5 h).The pausing scores of individual transcripts increased or decreased during the 0-0.5 h interval.Transcripts for genes involved in macromolecular modifications, such as those encoding protein kinases, phosphatases, and ubiquitin ligases, had lower pausing scores in darkness, suggesting modulation of protein homeostasis and signaling pathways by ribosome pausing in darkness.In addition to providing an energy source and a photomorphogenesis signal, a sudden light illuminating etiolated seedlings also causes physiological stress.Genes involved in stress were also found to be differentially regulated by ribosome pausing.The increased ribosome pausing along BLUS1 transcripts and the decreased ribosome pausing along transcripts of genes involved in the misfolded protein response indicate that light causes severe stress to etiolated seedlings, with an important role for ribosome pausing in alleviating this stress.
Previous studies revealed that ribosome pausing represses translation progression [2,16,36-39,47-50]. Our results agree with this conclusion based on two lines of evidence. First, we observed a scarcity of RPFs around the paused site, which itself is enriched in RPFs. As a consequence, the fewer RPFs on both sides of the ribosome-pausing site, together with the high number of RPFs at the paused site, might mask changes in TE compared to random sites with no ribosome pausing. Indeed, the RPF coverage across the CDS was generally lower when ribosome pausing occurred, which prompted us to introduce the parameter TI to reflect true translation activity. We detected a clear negative relationship between ribosome pausing (measured as a pausing score) and translation activity (as estimated by TI). Second, we performed a quantitative proteomics analysis of etiolated seedlings (0 h time point) and of seedlings after 2 h of light exposure, which confirmed the negative relationship between ribosome pausing and protein production.
The secondary structure of mRNA is crucial for causing ribosome pausing. Our study found a significant downstream CGC motif whose presence was strongly correlated with ribosome pausing. Considering the large number of secondary structures in mRNAs, it is unlikely that every secondary RNA structure can efficiently impede translation. Several studies have demonstrated that stem-loop structures can slow down translation elongation [18,51-54]. The length of RNA stem-loops was also reported to be crucial for translational repression [55]. Additionally, evidence from Arabidopsis indicated that downstream hairpins can affect the selection of the upstream start codon (uAUG), with the bacterial elicitor elf18 inducing RNA helicases to rescue translation inhibition by loosening hairpins [30]. Although secondary structures were known to induce ribosome pausing, no specific sequence had been identified. Our study discovered a significant motif that is important for ribosome pausing and translational inhibition through the formation of a stem-loop structure. Although its function in ribosome pausing still needs to be verified experimentally, the motif identified in this study provides a potential cis-element target for engineered translational regulation in plants.
Maize Seedling Growth and Treatments
Seeds of the maize (Zea mays) inbred line B73 were surface sterilized with 3% (w/v) sodium hypochlorite for 10 min and then washed with distilled water five times. The disinfected seeds were sown in pots containing soil substrate (0-6 mm, PINDSTRUP) and grown in a growth chamber at 27 °C and 65% humidity in the dark for 6 days. After 6 days of darkness, leaves of etiolated seedlings were cut and frozen in liquid nitrogen as the 0 h samples. The remaining seedlings were exposed to light by transferring the pots to a plant culture shelf at 27 °C under a white light intensity of 330 µmol/m²/s; leaves were collected at 0.5, 1, 2, and 4 h into light exposure. The entire culture and treatment process was repeated three times independently to reduce operational error.
Ribo-Seq and RNA-Seq Library Construction
A previously described method was modified to isolate ribosome-protected fragments (RPFs) [8]. In detail, 0.5 g (fresh weight) of leaf tissue was ground in liquid nitrogen to a fine powder, to which 2.5 mL polysome extraction buffer (150 mM Tris-HCl, pH 8.0, 40 mM KCl, 20 mM MgCl2, 2% [v/v] polyoxyethylene, 0.4% [w/v] sodium deoxycholate, 1.5 mM dithiothreitol, 50 µg/mL chloramphenicol, 50 µg/mL cycloheximide, and 10 units/mL DNase I) was added; the mixture was kept on ice for 15 min. The slurry was filtered through one layer of Miracloth and centrifuged at 16,000× g for 10 min at 4 °C. Total RNA was isolated with Tri-reagent (Merck KGaA, Darmstadt, Germany) for RNA-seq. For Ribo-seq, 200 units of RNase I were added to 450 µL of supernatant (25 µg RNA) and incubated at 25 °C for 1 h. Ribosome monomers were isolated with size-exclusion columns (Illustra MicroSpin S-400 HR Columns; GE Healthcare, Chicago, IL, USA). Ribosome-protected RNA fragments were isolated with Tri-reagent. Ribo-seq libraries were constructed based on a described method with some modifications [56]. In brief, after preliminary size selection by 10% PAGE with 7 M urea, 27 to 34 nt RPFs were recovered, their ends were treated with T4 PNK, and a 3′ adaptor (5′-rAppGATCGGAAGAGCACACGTCT-NH2) was ligated using truncated T4 RNA ligase 2 (NEB). Reverse transcription was performed with SuperScript II (ThermoFisher Scientific, Waltham, MA, USA) and an RT primer (5′-GATCGTCGGACTGTAGAACTCTGAACGTGTAGATCTCGGTGGTCGCCGTATCATT/iSp18/CACTCA/iSp18/CAGACGTGTGCTCTTCCGATCT). The reverse transcription was performed at 42 °C for 1.5 h, followed by incubation at 70 °C for 10 min. First-strand cDNA derived from ribosomal RNA (rDNA) was removed by hybridization with rDNA probes (Table S6). All probes and primers used in the following experiments were designed in our laboratory and synthesized by Sangon (Shanghai, China). The single-stranded cDNAs were circularized with CircLigase ssDNA ligase (Epicentre CL4111K) and amplified by PCR (13 cycles, 60 °C annealing; primer sequences are listed in Table S7).
The Ribo-seq libraries were sequenced as paired-end 150 bp reads or single-end 75 bp reads on an Illumina NovaSeq 6000 instrument at Shanghai PersonalBio Technology (Shanghai, China).The RNA-seq libraries were sequenced on an Illumina NovaSeq 6000 instrument.
Quantitative PCR
First-strand cDNA was synthesized from total RNA samples using SuperScript IV (ThermoFisher Scientific) and an oligo(dT) primer. The reverse transcription was performed at 42 °C for 1.5 h, followed by incubation at 70 °C for 10 min. Quantitative PCR (qPCR) was performed with SYBR Green qPCR mix (ThermoFisher Scientific, Waltham, MA, USA) and gene-specific primers (Table S8). Ubiquitin (Ubi, Zm00001d053838) was selected as an internal control.
TMT-Labeled Mass Spectrometry Analysis
Leaf samples (about 100 mg, fresh weight) were ground into a fine powder in liquid nitrogen and homogenized in 1 mL extraction buffer (0.9 M sucrose, 0.5 M Tris-HCl, 50 mM EDTA, 0.1 M KCl, 1% [v/v] Triton X-100, 2% [v/v] β-mercaptoethanol, and 1% [w/v] protease inhibitor cocktail set VI [Calbiochem], pH 8). To this mixture, 1 mL of Tris-HCl-saturated phenol (pH 7.5) was added, and the upper phenolic phase was separated from the aqueous phase by centrifugation at 8000× g for 10 min at 4 °C. The upper phase was transferred to a fresh tube, to which five volumes of pre-cooled 0.1 M ammonium acetate in methanol were added, and the mixture was kept at −20 °C overnight. The proteins were pelleted by centrifugation at 10,000× g for 15 min at 4 °C; the pellet was washed with pre-cooled methanol and then acetone. Air-dried pellets were resuspended in 300 µL of lysis solution (6 M urea, 50 mM ammonium bicarbonate, pH 8.0) and incubated for 3 h at room temperature. Protein digestion, TMT labeling, and mass spectrometry were performed by Shanghai Luming Biological Technology (Shanghai, China). All analyses were performed with a Q Exactive HF mass spectrometer (ThermoFisher Scientific, Waltham, MA, USA) equipped with a Nanospray Flex source (ThermoFisher Scientific). Samples were loaded and separated on an Agilent Zorbax Extend C18 column (2.1 × 150 mm, 5 µm) on an Agilent 1100 HPLC (ThermoFisher Scientific).
ProteomeDiscoverer 2.4 (ThermoFisher Scientific) was used to search the raw data thoroughly against the Uniprot taxonomy_4577 database.The alkylation of cysteine was considered a fixed modification during the search.For protein quantification, TMT was selected.The global false discovery rate (FDR) was set to 0.01, and protein groups considered for quantification required at least one peptide.
Sequencing Quality Control
Raw RNA-seq and Ribo-seq reads of low quality and adapter sequences were trimmed by fastp (v0.22.0)[57].The clean resulting Ribo-seq reads were subjected to a length cutoff of 23-36 nt, with no undetected base allowed ("N" base number ≤ 0), a minimum quality score > 15, and paired reads correction.Raw RNA-seq reads were cleaned with default parameters.In total, ~81 M clean RNA-seq reads and ~30.5 M Ribo-seq reads were obtained from each replicate.After cleaning, paired reads from the Ribo-seq data were merged into a single read with the tool NGmerge (v0.2) [58] with parameters "-n 4 -p 0.05 -z -g." FastQC (v0.11.9) [59] was used to analyze the quality of the reads, and MultiQC (v1.11) [60] was subsequently used for integrated comparative analysis of all samples.
Feature Read Counts and Differential Expression Analysis for RNA-Seq and Ribo-Seq Reads
For mapped RNA-seq reads, the union mode of HTSeq (v0.13.5) [66] was used to obtain the number of reads mapping to different structures (exons and CDSs) for each feature (gene, transcript). Differential expression analysis (filter: |log2(fold change)| > 1 and p-value < 0.05) and gene count normalization were performed with DESeq2 (v1.24.0) [67]. For Ribo-seq, featureCounts (v2.0.1) was used to calculate RPF counts. The identification of differentially abundant RPFs (filter: |log2(fold change)| > 1, p-value < 0.05, and no fewer than an average of five supporting reads) was also performed with DESeq2. For all samples, results were obtained as normalized counts, TPM, and FPKM values. MA plots were generated using the R package ggplot2 (v3.3.3) [68]. Translation efficiency (TE) levels, TE values, and differential TE were calculated with the xtail package (v1.1.5) [32]. Ribosome pausing and uORF detection require transcriptome-scale mapping results as input; thus, Ribo-seq data were mapped to the reference genome with STAR before merging the results from each replicate for each sample.
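The differential-expression thresholds quoted above (|log2 fold change| > 1 and p-value < 0.05) can be applied to a DESeq2-style results table as in the sketch below. The column names follow DESeq2's default output, but the input file and any additional replicate filters are hypothetical and not the authors' exact workflow.

```python
import pandas as pd

def call_degs(results_csv, lfc_cutoff=1.0, pval_cutoff=0.05):
    """Label genes as up/down/unchanged from a DESeq2-style results table.

    Expects columns 'log2FoldChange' and 'pvalue' (DESeq2 defaults); the
    study additionally required a minimum average read support for RPFs.
    """
    res = pd.read_csv(results_csv, index_col=0)
    up = (res["log2FoldChange"] > lfc_cutoff) & (res["pvalue"] < pval_cutoff)
    down = (res["log2FoldChange"] < -lfc_cutoff) & (res["pvalue"] < pval_cutoff)
    res["call"] = "unchanged"
    res.loc[up, "call"] = "up"
    res.loc[down, "call"] = "down"
    return res

# Hypothetical usage:
# degs = call_degs("deseq2_0h_vs_0.5h.csv")
# print(degs["call"].value_counts())
```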
Trait Detection of Ribosome-Protected Fraction and uORF Prediction
The distribution of RPFs was plotted with the tool metaplot in the software RiboCode (v1.2.11) [69].The transcription annotation gtf file was converted by the prepare_transcripts tool.Based on the distribution of read counts for each read length, the protection range and the P-site for different read lengths were obtained for uORF and pausing analyses.RiboCode main tool (parameters: "-l no -g -b"; read length: 24-36 nt) was applied to the following analysis, including uORF prediction.The bar plot and pie graph of distribution and trait statistics were also obtained with RiboCode.
Ribosome-Pausing Detection and Analysis
The pausing score and the z-score of all pausing scores within the same coverage bins were calculated with the software PausePred (v5.18.2) [70]. The screening cutoffs were a pausing score > 50, a z-score > 1.65, and a read count at the position > 20. To investigate sequence biases related to ribosome pausing, transcript sequences were aligned with the pausing site as the central position. Nucleotide and amino acid distributions were compared between genes whose transcripts showed pausing and 2000 randomly chosen transcripts. To investigate the GC content around the pausing sites, we calculated the average GC content on two scales: ±500 bp around the pausing site (or a randomly chosen RPF-binding site), and the regions from the start codon to the pausing site (or random RPF-binding site) and from the pausing site to the stop codon, each divided into 50 relative bins. To identify potential motifs causing the pausing, 250 bp of sequence upstream and downstream (including the pausing site) were separately extracted and subjected to a motif scan. The motifs in ribosome-pausing regions (250 bp on either side of the pausing site) were analyzed with the MEME suite (v5.5.2; parameters: -minw 3 -maxw 20 -nmotifs) [71]. The significant motifs were then converted to a position weight matrix (PWM) based on the MEME frequency results. To examine the relationship between pausing level and the presence of the motif, the longest transcripts of all genes carrying the PWM motif were scanned using the matrix-scan tool in RSAT (parameters: -v 1 -pseudo 1 -decimals 1 -1str -origin end -bgfile longest.fa.8bp.freq -bg_pseudo 0.01 -return limits -return sites -return pval -return rank -return normw -return weight_limits -return bg_residues -lth score 1 -uth pval 1e-4 -seq_format fasta -n score) [72]. The 8 bp base frequency table of all genes in the reference genome, used as the background for the scan, was generated with the oligo-analysis tool in RSAT (parameters: "-l 8 -quick -v 1 -1str"). To determine the variation in pausing across time points, the maximum pausing score over all positions in each pausing transcript was calculated for each time point. The genes were then clustered based on the pausing scores of their transcripts with the R package pheatmap [73].
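The cutoffs used to call pausing events (pausing score > 50, z-score > 1.65, and more than 20 reads at the position) can be expressed as a simple filter over a PausePred-style output table, as sketched below. The column names are assumptions for illustration, not the tool's exact headers.

```python
import pandas as pd

def filter_pausing_events(pausepred_tsv,
                          min_score=50.0, min_zscore=1.65, min_reads=20):
    """Keep candidate pausing sites passing the thresholds used in this study.

    Assumes a tab-separated table with columns 'transcript', 'position',
    'pause_score', 'zscore' and 'reads' (hypothetical column names).
    """
    sites = pd.read_csv(pausepred_tsv, sep="\t")
    keep = (
        (sites["pause_score"] > min_score)
        & (sites["zscore"] > min_zscore)
        & (sites["reads"] > min_reads)
    )
    return sites[keep].sort_values("pause_score", ascending=False)

# Hypothetical usage:
# paused = filter_pausing_events("pausepred_0h.tsv")
# print(paused.head())
```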
Translation Intensity (TI) Calculation
The TI of each transcript was calculated as follows: TI_transcript = Median_RPFs × Coverage_RPFs, where Median_RPFs is the median RPF read depth across the CDS and Coverage_RPFs is the fraction of CDS nucleotides covered by RPFs.
Gene Ontology and KEGG Enrichment Analysis
The transcripts exhibiting significant ribosome-pausing sites were converted into a gene list that was submitted to the Gene Ontology website [74] (https://geneontology.org/, accessed on 16 July 2023) for the analysis of biological processes, molecular functions, and cellular components. For the KEGG enrichment analysis, the YuLab-SMU/createKEGGdb package and the command "createKEGGdb::create_kegg_db('zma')" were used to download and construct the KEGG database. ClusterProfiler [75] was used to detect enrichment. Both enrichment results are shown as bubble plots drawn with ggplot2.
Graphical Visualization of Results
The distribution of read counts as a function of read length and sampling time point was plotted with the geom_bar function of the ggplot2 R package. Sequence logos were used to identify the conserved bases, and the conserved amino acids of the protein being translated, around the pausing site, using the ggplot2, ggpubr, and ggseqlogo R packages. Pausing sites are defined at the nucleotide level, whereas amino acids are encoded by three bases; regardless of whether the pausing site fell on the first, second, or third position of a codon, the corresponding amino acid was defined as the pausing amino acid. ORFs were taken from the genome transcript annotation. Other dot plots, bar plots, box plots, and line plots were also drawn with ggplot2. Plots with significance markers were generated with the ggsignif package [76]. Plots with dot annotation texts used the ggrepel package [77].
Figure 1. Genomic distribution, length, and 3 nt periodicity of identified ribosome-protected fragments (RPFs): (A) Distribution of RPFs across different features of the maize genome. Annotated coding sequences (CDSs), upstream open reading frames (uORFs), downstream ORFs (dORFs), overlapping ORFs, and novel coding regions are indicated with different colors. The numbers outside and inside parentheses indicate the gene number and the percentage of total reads, respectively. (B) Meta-gene analysis of the 29-nucleotide (nt) RPFs near the annotated translation start and stop sites in the maize genome. The red, blue, and orange bars represent the three possible reading frames. A, P, and E indicate the aminoacyl-tRNA entry (A) site, the peptidyl-tRNA (P) site, and the uncharged tRNA exit (E) site in ribosomes, respectively. The numbers in the drawn ribosomes indicate the number of nucleotides protected by ribosomes upstream of the start codon and downstream of the stop codon.
Figure 2. Translational regulation mainly occurs in the early stages of light exposure. (A-E) Differentially translated genes (DTGs) following different durations of light exposure ((A), 0-0.5 h; (B), 0.5-1 h; (C), 1-2 h; (D), 2-4 h; (E), 0.5-4 h). Upregulated and downregulated DTGs are indicated with red and blue dots, respectively. (F) Venn diagrams showing the extent of overlap between differentially expressed genes (DEGs) and DTGs responsive to light exposure. DTGs and DEGs are indicated by the blue and yellow circles, respectively. The numbers represent the specific and shared DEGs and DTGs.
Figure 4 .
Figure 4. Genome-wide light-responsive ribosome-pausing events in maize-etiolated seedlings: (A) Venn diagram showing the extent of overlap between significant ribosome-pausing events identified at each of the time points of illumination.The numbers indicate pausing events specific to each sample or common to different samples.(B) Clustering analysis of transcripts showing ribosome-pausing events defining five clusters.The color scale indicates the strength of ribosome pausing.(C) Gene ontology (GO) term enrichment analysis (biology processes) of the genes whose transcripts belong to one of the five clusters defined above.The size of the circles indicates the number of genes; the color indicates the P value.
Figure 5 .
Figure 5. Ribosome pausing negatively regulates translation in maize.(A) The Wilcoxon test was used to assess significant differences in TE for transcripts showing ribosome pausing at different time points of light exposure.Significant mark * for p value < 0.05.(B) Distribution and coverage of RPFs along five randomly chosen transcripts showing ribosome pausing at different time points of light exposure.β-tubulin 6b is a non-pausing control.(C) Translation intensity (TI) for transcripts with high pausing scores at 0 or 2 h into light exposure.Significant mark *** for p value < 0.001.(D) Protein abundance, based on mass spectrometry analysis, translated from transcripts with high pausing scores at 0 or 2 h into light exposure.Significant mark *** for p value < 0.001.
Figure 5 .
Figure 5. Ribosome pausing negatively regulates translation in maize.(A) The Wilcoxon test was used to assess significant differences in TE for transcripts showing ribosome pausing at different time points of light exposure.Significant mark * for p value < 0.05.(B) Distribution and coverage of RPFs along five randomly chosen transcripts showing ribosome pausing at different time points of light exposure.β-tubulin 6b is a non-pausing control.(C) Translation intensity (TI) for transcripts with high pausing scores at 0 or 2 h into light exposure.Significant mark *** for p value < 0.001.(D) Protein abundance, based on mass spectrometry analysis, translated from transcripts with high pausing scores at 0 or 2 h into light exposure.Significant mark *** for p value < 0.001.
Figure 6 .
Figure 6.Sequence features of ribosome-pausing sites: (A) Meta-analysis showing the distribution of ribosome-pausing sites along different regions of maize transcripts.UTR, untranslated region.(B) Sequence logo of the region upstream of ribosome-pausing sites.The height of each letter indicates their probability at the corresponding positions.The positions along the x-axis are relative to the ribosome-pausing sites.The -1 position indicates the upstream 1 nt to the ribosome-pausing site.(C) Metaplot of GC content around ribosome-pausing sites (blue) and random RPFs (green).The GC content is over 500 bp on either side of the ribosome-pausing sites.(D) GC content over fulllength transcripts with ribosome pausing and random transcripts.The lengths of transcripts have been normalized.
Figure 6 .
Figure 6.Sequence features of ribosome-pausing sites: (A) Meta-analysis showing the distribution of ribosome-pausing sites along different regions of maize transcripts.UTR, untranslated region.(B) Sequence logo of the region upstream of ribosome-pausing sites.The height of each letter indicates their probability at the corresponding positions.The positions along the x-axis are relative to the ribosome-pausing sites.The -1 position indicates the upstream 1 nt to the ribosome-pausing site.(C) Metaplot of GC content around ribosome-pausing sites (blue) and random RPFs (green).The GC content is over 500 bp on either side of the ribosome-pausing sites.(D) GC content over fulllength transcripts with ribosome pausing and random transcripts.The lengths of transcripts have been normalized.
Figure 7 .Figure 7 .
Figure 7. (A) A repeated CGC motif appears in the region downstream of ribosome-pausing sites.The height of each letter indicates the probability at the corresponding position.The positions along Figure 7. (A) A repeated CGC motif appears in the region downstream of ribosome-pausing sites.The height of each letter indicates the probability at the corresponding position.The positions along the x-axis are the length of the motif.(B) Maximum ribosome-pausing scores between transcripts with or without CGC motifs in all ribosome-paused transcripts.Significant mark *** for p value < 0.001.(C) Predicted secondary structure of the CGC motif.The possibilities of base-pairing are indicated as a gradient from blue (0%) to red (100%).(D) GO term enrichment analysis of genes whose transcripts contain the CGC motif.
|
v3-fos-license
|
2020-01-23T09:09:56.015Z
|
2020-01-01T00:00:00.000
|
214153346
|
{
"extfieldsofstudy": [
"Biology"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "http://www.scielo.br/pdf/rbent/v64n1/1806-9665-rbent-64-1-e201947.pdf",
"pdf_hash": "b0faed40fc9a643b37eefacc8d96b7cc13c8bb47",
"pdf_src": "Adhoc",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46536",
"s2fieldsofstudy": [
"Biology",
"Agricultural And Food Sciences"
],
"sha1": "a882a767fe446e3ed421db48d1a2c2ef2fd33d0e",
"year": 2020
}
|
pes2o/s2orc
|
Molecular characterization of Bacillus thuringiensis strains to control Spodoptera eridania (Cramer) (Lepidoptera: Noctuidae) population
ABSTRACT The main objective of this study was to characterize the toxicity and genetic divergence of 18 Bacillus thuringiensis strains in the biological control of Spodoptera eridania. Bacterial suspensions were added to the S. eridania diet. Half of the selected B. thuringiensis strains caused high mortality seven days after infection. The genetic divergence of B. thuringiensis strains was assessed based on Enterobacterial Repetitive Intergenic Consensus (ERIC) and Repetitive Extragenic Palindromic (REP) sequences, and five phylogenetic groups were formed. Despite their genetic diversity, B. thuringiensis strains did not show any correlation between the collection sites and toxicity to larvae. Some B. thuringiensis strains are highly toxic to S. eridania, thus highlighting their potential as biological control agents for this pest.
Introduction
During the 2017/18 season, the total grain yield in Brazil was estimated at 229.53 million tons, and trends indicate a further increase in the upcoming seasons (CONAB, 2018). However, this expectation of agricultural growth may not be achieved due to the emergence of phytosanitary issues responsible for crop injuries in many agricultural regions. Herbivorous insects are estimated to destroy one fifth of the world's total crop production annually (Sallam and Bothe, 1999).
In the past, chemical control was the most common method employed to limit pest spread on crops, but over the last two decades natural approaches have emerged as alternative methods of insect control. Bacillus thuringiensis (B. thuringiensis) (Berliner, 1911) is a useful tool for the natural control of insects. It is a Gram-positive bacterium that, during the sporulation stage, produces crystalline protein inclusions, called Cry proteins, which have selective insecticidal activity against different groups of insects (Yamamoto and Dean, 2000). After B. thuringiensis was employed as a biological control agent, the use of chemical products progressively declined, as did the environmental pollution caused by their toxic residues (James, 2015). Currently, B. thuringiensis sub-species represent about 98% of formulated sprayable bacterial microbial pesticides (Lacey et al., 2015). Nevertheless, it still represents only 2% of the global pesticide market (Bravo et al., 2011).
B. thuringiensis produces several endotoxins, such as Crystal (Cry) proteins, Cytolytic (Cyt) proteins, Vegetative insecticidal proteins (Vip) and thuringiensin (β-exotoxin) (Bravo et al., 2007). In spite of the presence of four endotoxin types, Cry proteins have been widely considered the major crystal component characterizing B. thuringiensis strains (Crickmore et al., 1998), mainly because of the specific toxic potential these proteins may exhibit towards target insects.
Molecular characterization of new B. thuringiensis genes is important due to their specific mode of action in target insects. Approximately 770 cry genes have already been sequenced, cataloged and classified according to gene similarity analysis (Jouzani et al., 2017). These genes are updated and listed on the website: http://www.lifesci.sussex.ac.uk/home/Neil_Crickmore/Bt/intro.html. Polymerase Chain Reaction (PCR) is the most used method to characterize B. thuringiensis genes (Fané et al., 2017). Genetic diversity analyses have focused on Enterobacterial Repetitive Intergenic Consensus (ERIC) elements and Repetitive Extragenic Palindromic (REP) sequences, also via PCR (Mishra et al., 2017). Repetitive Elements Polymorphism (REP-PCR) fingerprinting is commonly used to discriminate bacterial species by analyzing the distribution of repetitive DNA sequences in prokaryotic genomes (Versalovic et al., 1991). This methodology uses specific primer sets for recognition of repetitive sequences that show inter-repetitive distances as well as specific patterns among bacterial species and strains (Van Belkum et al., 1998), indicating that it represents a rapid shortcut for addressing the genetic relationship of unknown strains with the major known serovars (Cherif et al., 2007). Likewise, ERIC-PCR involves the use of primers composed of 22 nucleotides displaying high homology for repetitive intergenic sequences commonly present in all prokaryotic kingdoms (Versalovic et al., 1991).
Spodoptera eridania (Cramer) (Lepidoptera: Noctuidae) causes significant losses to soybean and cotton crops (Silvie et al., 2013). It is responsible for injuries to pods and leaves, leaf loss, and reductions in yield and plant growth (Bernardi et al., 2014). Currently, the most widely used method for Spodoptera spp. control consists of chemical pesticides such as phosphates, carbamates, pyrethroids and growth regulators (CABI, 2018). B. thuringiensis may be an alternative to control this insect pest.
The main objective of this work was to characterize the toxicity towards S. eridania and the genetic diversity of 18 B. thuringiensis strains based on ERIC and REP sequences, and to identify possible groupings of related strains.
B. thuringiensis strains
The study was carried out at the Laboratory of Biological Control, Maize and Sorghum, in Sete Lagoas, MG, Brazil. All strains had been previously tested against the fall armyworm, S. frugiperda (J.E. Smith) (Lepidoptera: Noctuidae) (Valicente and Barreto, 2003; Valicente and Fonseca, 2010). A total of 18 B. thuringiensis strains were randomly selected from the B. thuringiensis collection and used in the following experiments (Table 1).
The strains were grown in solid Luria-Bertani (LB) medium at 28 ± 2 °C, and the pH was adjusted to 7.5. After 72 hours, five colonies were chosen from each strain and inoculated into individual Petri dishes with solid LB medium. A loopful of each strain from two Petri dishes was inoculated into 1 mL of sterilized distilled water for DNA extraction, and the three remaining Petri dishes were used in bioassays.
Insect feeding bioassays
The bioassay was carried out with neonate S. eridania larvae fed on an artificial diet (Bowling, 1967), with 120 µL of B. thuringiensis suspension at a concentration of 10^8 spores/mL. The check treatment used artificial diet and water. The experimental protocol was a completely randomized block design consisting of 19 treatments with 4 replicates, each replicate comprising 4 caterpillars. Each neonate larva was maintained individually in a plastic container with artificial diet and an acrylic lid. Mortality was evaluated after 7 days based on the average number of surviving caterpillars. The mortality rate was calculated as (number of dead caterpillars / total number of caterpillars) × 100.
The toxicity results for neonate larvae were submitted to analysis of variance (ANOVA) in the Assistant software, and the means were compared by the Scott-Knott test at 1% significance (p < 0.01, F = 48.27, F-crit = 2.26).
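For illustration only, the sketch below computes per-replicate mortality percentages and a one-way ANOVA in Python. The original analysis used the Assistant software with a Scott-Knott test, which is not reproduced here, and the replicate counts in the example are hypothetical.

```python
# Illustrative sketch only: per-replicate mortality (%) and one-way ANOVA with SciPy.
# The dead-larva counts below are hypothetical, not the study's data.
import numpy as np
from scipy import stats

larvae_per_replicate = 4
dead_counts = {                 # dead caterpillars per replicate (hypothetical)
    "T09":   [4, 4, 4, 4],
    "1042B": [3, 2, 2, 3],
    "1033B": [1, 0, 1, 0],
}

# mortality (%) per replicate = dead / total caterpillars x 100
mortality = {s: [100.0 * d / larvae_per_replicate for d in reps]
             for s, reps in dead_counts.items()}
for strain, vals in mortality.items():
    print(f"{strain}: mean mortality = {np.mean(vals):.1f}%")

# one-way ANOVA across strains; a Scott-Knott grouping would require a
# dedicated implementation (e.g., the ScottKnott package in R)
f_stat, p_value = stats.f_oneway(*mortality.values())
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```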
DNA extraction and PCR conditions
DNA extraction of the 18 B. thuringiensis isolates was performed according to the Wizard Genomic DNA Purification Kit (Promega, Madison, WI) procedure. DNA samples were quantified in an ND-1000 UV/VIS spectrophotometer (NanoDrop Technologies, USA), diluted to a concentration of 50 ng/µL and stored at −20 °C.
Binary matrices were generated from the amplification products and used as input data for the Bionumerics software (Applied Maths, Belgium), followed by Pearson's correlation analysis. A similarity matrix was calculated from the binary data using the Dice similarity coefficient. Clustering analysis was performed using this coefficient and UPGMA (Unweighted Pair Group Method with Arithmetic Mean), with a bootstrap of 1000 replicates to evaluate the consistency of the groups. Finally, Bionumerics produced both the similarity matrix and the dendrogram containing the 18 B. thuringiensis isolates.
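As an open-source illustration of this clustering step (the study itself used Bionumerics), the sketch below computes Dice similarities between binary band profiles and clusters them with UPGMA (average linkage). The band matrix and strain labels are invented for the example.

```python
# Minimal sketch of Dice similarity + UPGMA clustering on hypothetical
# presence/absence band profiles, using SciPy instead of Bionumerics.
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.cluster.hierarchy import linkage, dendrogram

# rows = strains, columns = presence/absence of ERIC/REP fragments (hypothetical)
bands = np.array([
    [1, 1, 0, 1, 0, 1],
    [1, 1, 0, 1, 1, 1],
    [0, 1, 1, 0, 1, 0],
    [0, 1, 1, 0, 0, 0],
]).astype(bool)
strains = ["T09", "813A", "1033B", "1089"]

# SciPy's 'dice' metric returns a dissimilarity (1 - Dice coefficient)
dice_dist = pdist(bands, metric="dice")
print("Dice similarity matrix:\n", 1 - squareform(dice_dist))

# UPGMA corresponds to average linkage on the Dice distances
tree = linkage(dice_dist, method="average")
dendrogram(tree, labels=strains, no_plot=True)  # set no_plot=False to draw the tree
```

Average linkage on a Dice-based distance matrix is the standard way to reproduce a UPGMA dendrogram from binary fingerprint data; bootstrap support, as used in the study, would require resampling the band columns and repeating the clustering.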
Molecular analysis
The pattern of REP-PCR and ERIC-PCR polymorphic bands was individually identified according to the PCR product-specific migration profile after agarose gel electrophoresis. The electrophoresis profile of the PCR products amplified with the ERIC primers (Figure 2A) exhibited 6 to 10 fragments ranging from 100 bp to 1,500 bp, while the REP primers (Figure 2B) amplified 1 to 10 fragments per strain with sizes ranging between 50 and 2,000 bp.
A dendrogram was constructed through Pearson correlation for the genetic diversity analysis of ERIC and REP sequences with a 50% similarity cut-off. As a result, five groups were separated at this cut-off (Figure 3). The four strains clustered in group I induced mortality ranging from 9.37% (1033B) to 23.44% (1089). These strains were isolated from samples collected in Boa Esperança (1033B, 1089 and 1093) and Sacramento (986J), cities in the state of Minas Gerais. The mortality caused by strains clustered in group II ranged from 11.25% (1043NV) to 61.54% (1042B), and with the exception of strain 1394, isolated in Pernambuco state, the others are from Coqueiral (1042B and 1043NV) and Teixeiras (939F), cities also located in Minas Gerais.
Group III is represented only by the T09 strain, isolated in France, which caused 100% mortality of S. eridania larvae. Group IV was composed of six strains, and excluding the isolate 976D, obtained in Uberaba, Minas Gerais state, the other isolates (939FB, 970C, 1058G, 813A, and 939FD), also from cities in Minas Gerais, caused mortality rates above 90% in larvae fed on artificial diet. Group V comprised the strains 7B8, 1039C2 and 1058A, isolated in the regions of Limoeiro (Alagoas state), and Coqueiral and Guapé (Minas Gerais), respectively. Similar to the strain T09 and the isolates in group IV, the strains in group V caused high mortality rates in larvae, with values varying from 85.94% to 93.65%.
Discussion
According to our results, half of the B. thuringiensis strains tested were able to cause death of S. eridania larvae. This shows the high potential some specific B. thuringiensis strains have as biological control agents for pest insects like Spodoptera species, especially S. eridania in our case. Constanski et al. (2015) found that 3 strains exhibited toxicity higher than 90% against S. eridania and S. frugiperda. Similarly, dos Santos et al. (2009) identified among 100 B. thuringiensis strains some with a toxicity higher than 70% against S. eridania, S. cosmioides and S. frugiperda larvae. Other studies also found many strains harboring different cry1 genes that caused 100% mortality in S. frugiperda neonate larvae. Monnerat et al. (2007) reported the toxicity of a B. thuringiensis collection with 1,400 isolates against S. frugiperda, Anticarsia gemmatalis and Plutella xylostella; twenty-seven B. thuringiensis isolates caused 100% mortality in S. frugiperda, A. gemmatalis and P. xylostella larvae. Fatoretto et al. (2007) also reported a high mortality rate in S. frugiperda larvae caused by 30% of 115 B. thuringiensis strains. Huang et al. (2018) verified that the CAB109 strain caused mortality of up to 55% in S. exigua larvae fed on diets containing B. thuringiensis suspension, and this strain also influenced the growth of S. exigua larvae in all instars. Valicente and Fonseca (2010) detected mortality of around 95.8% caused by the T09 strain against Spodoptera populations. Praça et al. (2004) found that in a group of 300 B. thuringiensis strains tested against 5 insect species, including Spodoptera sp., only 2 strains caused the death of all insects.
Research on the development or identification of alternative methods to reduce the environmental impacts caused by conventional chemical compounds has increased in the scientific community (Glare and O'Callaghan, 2000). Considering the high toxicity of some B. thuringiensis isolates to different pest insects, the use of these bacteria as a biotechnological tool for pest control can be very beneficial to agriculture worldwide. Höfte and Whiteley (1989) explain that B. thuringiensis toxicity may be associated with the different shapes acquired by bacteria-produced parasporal inclusions. Moreover, variations observed in the susceptibility of Spodoptera populations to B. thuringiensis-produced toxins cannot be attributed only to the structural shape of endotoxins but also to the genetic diversity found in B. thuringiensis communities (Hernández-Martínez et al., 2008).

Figure 1. Mortality of S. eridania caterpillars caused by different B. thuringiensis (Bt) strains at a concentration of 10^8 spores/mL after 7 days of bioassay. Means followed by the same letter did not differ statistically by the Scott-Knott test at 1% significance (p < 0.01).
Just a few papers have been published using REP-PCR to study the genetic diversity of B. thuringiensis isolates. Usually, REP and ERIC sequences are used for genetic diversity analyses of many organisms, including bacteria (Ahmadi et al., 2018; Katara et al., 2012; Mishra et al., 2017), fish (Fernández-Álvarez et al., 2018), and plants (Rampadarath et al., 2015). REP and ERIC primers have been useful in distinguishing B. thuringiensis isolates found in different locations (Katara et al., 2012) and in roots of various legumes (Mishra et al., 2017). Based on the similarity of individuals distributed in the same group, only a small correlation was observed between B. thuringiensis strain-induced mortality and collection place, even though some strains originally isolated in a particular region clustered together after the genetic diversity analysis (Figure 3).
In an attempt to link the subspecies according to collection places and toxicity against Spodoptera larvae, Silva and Valicente (2013) analyzed 65 B. thuringiensis strains using REP, ERIC and BOX sequences, which resulted in 55 amplified fragments in 10 population groups. B. thuringiensis isolates found in Goiás state showed a high similarity among themselves when compared to isolates found in other Brazilian states, which showed a larger genetic distance. As a consequence, the correlation between the collection places and toxicity was influenced by this genetic divergence for the last group. Katara et al. (2012) reported that although REP-PCR and ERIC-PCR did not generate similar fragment patterns among 113 strains found at the same location in India, such techniques may still be useful for distinguishing B. thuringiensis strains. A high genetic diversity was also observed among B. thuringiensis isolates present in different regions of Mexico, in which 39 fingerprints were identified in 40 B. thuringiensis isolates using ERIC-PCR (García et al., 2015). Vilas-Bôas and Lemos (2004) observed a high genetic diversity among 218 B. thuringiensis isolates found in Brazil.
Our results suggest that the genetic diversity of B. thuringiensis may be influenced by both ecological factors and the geographic distribution of strains, which have likely gone through a process of adaptation to different habitats. Such genetic variability is a very important characteristic of B. thuringiensis strains, as it enables the bacteria to adapt to several environments (Galvis and Moreno, 2014). The insecticidal activity of the isolates indicates that these strains could be considered potential biological control candidates for further bioassays, with future perspectives of applying the best-performing strains as bioproducts in areas threatened by S. eridania attacks. Silva and Valicente (2013) also characterized the genetic diversity of 65 B. thuringiensis strains. Their results showed 1 to 4 fragments obtained for strains characterized with the REP primers, with fragment sizes ranging from 396 to 3,054 bp. Using ERIC primers, the number of fragments varied from 1 to 9, with sizes ranging from 220 to 2,036 bp.
A recent study showed the molecular diversity of endophytic B. thuringiensis isolates from root nodules of legume plants using ERIC-PCR. The authors concluded that B. thuringiensis diversity may be related to different factors such as the host plant genotype, regional weather, and soil conditions, including the soil microbial communities. Additionally, bacteria can be transported by previously contaminated sources such as air dust, rainfall, and the cadavers of insects killed by B. thuringiensis toxins, which may be ingested by other living insects and animals capable of scattering B. thuringiensis through feces (Mishra et al., 2017).
The absence or presence of gene similarity could be associated not only with strain toxicity or collection places but also with other factors such as the occurrence of cry, cyt and vip genes or even β-exotoxins. Future work will aim to detect Cry, Cyt and Vip toxins in the previously tested strains under in vitro conditions, in order to check whether the presence of these genes might be a factor correlated with strain genetic similarity. The identification of β-exotoxin, known for being widely toxic to many species, is also extremely relevant for work focused on this approach, as its absence would allow determining the feasibility of the putative strain as a new biological control agent and its subsequent insertion into Integrated Pest Management (IPM) programs.
Figure 3
Dendrogram and similarity matrix produced by the Bionumerics software using agarose gel images as input data and a bootstrap of 1,000 replicates to estimate strain distribution. Data construction was supported by Pearson's correlation between ERIC and REP sequences, UPGMA cluster analysis and the Dice similarity coefficient.
REP and ERIC primers are useful for characterizing B. thuringiensis isolates. Furthermore, fingerprinting techniques used for bacterial population studies are considered advantageous due to their simplicity, their capacity to detect a wide range of sequences and their production of consistent results. In our work, the ERIC-PCR method was more informative than REP-PCR. Additionally, B. thuringiensis isolates collected in different habitats exhibited a certain degree of genetic diversity. Although there was only a small correlation between collection place and toxicity level, some isolates were highly entomopathogenic to S. eridania larvae. This work will contribute to the biological control of S. eridania, since some strains with high toxicity levels could be employed in B. thuringiensis-based formulations and also used as sources for prospecting further protein-expressing genes specifically toxic to lepidopterans.
Conflicts of Interest
The authors declare no conflicts of interest. This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.
Author contribution statement
This work was carried out in collaboration between all authors. Déborah Heloísa Bittencourt Machado, Kalynka Gabriella do Livramento, Wesley Pires Flausino Máximo and Bárbara França Negri designed the study, performed the statistical analysis, and wrote the first draft of the manuscript. Luciano Vilela Paiva and Fernando Hercos Valicente managed the literature searches. All authors read and approved the final version of the manuscript.
|
v3-fos-license
|
2016-09-26T11:17:33.744Z
|
2017-03-22T00:00:00.000
|
6658645
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "HYBRID",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s00192-017-3304-9.pdf",
"pdf_hash": "7445bef3d8f7c56291c093063045d0cf5ad64513",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46537",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "158dbda6663db6db5dd40b5322b8f495ca3eb16c",
"year": 2017
}
|
pes2o/s2orc
|
Choice of mode of delivery in a subsequent pregnancy after OASI: a survey among Dutch gynecologists
Introduction and hypothesis National and international guidelines do not provide clear recommendations on the mode of delivery in a subsequent pregnancy after obstetric anal sphincter injury (OASI). The aim of this study was to investigate the opinion of gynecologists in The Netherlands on this choice and the extent to which this choice is affected by the gynecologist’s characteristics. Methods Of 973 gynecologists sent a questionnaire seeking their opinion on the mode of delivery in 16 different case descriptions, 234 (24%) responded. Factors influencing the opinion of the respondents on the mode of delivery, the presence of anal symptoms, the degree of OASI and the characteristics of the respondents were analyzed by univariate and multivariate logistic regression analysis. Results Recommendations on the mode of delivery in a subsequent pregnancy after OASI showed considerable variation. The recommendations depended on (previous) symptoms and the degree of OASI. For gynecologists who based their recommendations on endoanal ultrasonography outcomes (7–20% depending on the case), the degree of OASI and severity of (previous) symptoms were less important. Gynecologists basing their recommendations on endoanal ultrasonography recommended a primary cesarean section less often. Gynecologist’s characteristics (including years of experience, type of hospital and subspecialty) had a small effect on their recommendations on the mode of delivery. Conclusions Due to lack of evidence, recommendations of gynecologists in The Netherlands on the mode of delivery in a subsequent pregnancy after OASI vary widely and depend on (previous) symptoms and the degree of OASI. Gynecologists who based their recommendations on endoanal ultrasonography outcomes recommended cesarean section less often.
Introduction
From the literature there is no clear evidence-based recommendation that gynecologists can give to women who have sustained obstetric anal sphincter injury (OASI) on the mode of delivery in a subsequent pregnancy. This may be due to a lack of randomized controlled trials evaluating the best mode of delivery after OASI. The information available on outcomes in women with subsequent delivery after OASI is derived mainly from small, retrospective, observational studies. These studies have shown that the rate of fecal incontinence after subsequent vaginal delivery is 7% to 10% in women with a previous OASI without fecal incontinence. This rate rises to 17-40% among women with fecal incontinence after OASI [1][2][3][4]. Whether and to what extent elective cesarean section reduces the risk is unknown. Two recently published cohort studies have shown that the mode of second delivery does not significantly affect the risk of long-term anal or fecal incontinence, and concluded that vaginal delivery following OASI is safe in appropriately selected women [5,6].
The guidelines on OASI of both the Dutch Society of Obstetrics and Gynecology (NVOG) [7] and the UK Royal College of Obstetricians and Gynecologists (RCOG) [8] recommend that the available evidence be discussed with the woman during counseling on the mode of delivery. They differ, however, with respect to the use of manometry and endoanal ultrasonography. The guidelines of the NVOG recommend that a cesarean section should not be performed just because of OASI in a previous delivery, but that this should be considered only in women with symptoms of compromised sphincter function after OASI. In contrast, the RCOG guidelines recommend that an elective cesarean section should be offered to women with symptoms of compromised sphincter function and to asymptomatic women with abnormal anorectal manometric or endoanal ultrasonographic features. The RCOG states that between 17% and 24% of women develop new or more serious complaints of anal incontinence after a second vaginal delivery [8]. The NVOG guidelines do not mention anorectal manometry or endoanal ultrasonography.
Due to the lack of high-quality evidence and clear advice in these guidelines, caregivers are left to use their best clinical judgement when counseling women about the best mode of delivery in subsequent deliveries. The aim of this study was to investigate the opinion of Dutch gynecologists on the best choice of mode of delivery in a subsequent pregnancy after OASI. The effects of gynecologist's characteristics, degree of OASI, symptoms of OASI and diagnostic outcomes (ultrasonography) were assessed.
Materials and methods
All 973 gynecologists registered in the database of the Dutch Society of Obstetrics and Gynecology were sent an online questionnaire by email between September and December 2014. All respondents included in this study worked in a general hospital, and most of them practiced obstetrics as part of their job. Physicians working in subspecialist centers or exclusively practicing general gynecology without obstetrics were not surveyed. Reminders were sent within 2 weeks to all gynecologists.
The questionnaire was developed by four gynecologists: one general gynecologist (teaching hospital), two urogynecologists (academic) and one perinatologist (academic). Prior to mass distribution, the survey was tested by five gynecologists who took on average 15 min to complete the online questionnaire. The questionnaire was divided into two parts. The first part contained questions on general characteristics of the respondent: gender, number of years of experience as a gynecologist, type of hospital, number of gynecologists in the respondent's department and subspecialty. The second part comprised 16 case descriptions. Cases 1 to 4 described a patient without complaints after OASI with the four different degrees of OASI as described by Sultan (grade 3A, 3B, 3C and 4) [9]. Cases 5 to 8 described a patient with transient symptoms of anal incontinence, cases 9 to 12 described a patient with persistent flatal incontinence, and cases 13 to 16 described a patient with persistent fecal incontinence. These cases were also repeated for the four different degrees of OASI. For each case description the respondent had to give a personal recommendation regarding the mode of delivery in a subsequent pregnancy: vaginal delivery, primary cesarean section or an approach depending on the outcome of endoanal ultrasonography.
The electronic questionnaire was created with the use of SurveyMonkey, a cloud-based online survey and questionnaire tool. All respondents completed the questionnaire in the SurveyMonkey database created for this purpose. All data were analyzed using SPSS version 22. The characteristics of the participating gynecologists and their recommendations on mode of delivery are presented as percentages. The effect of degree of OASI and the symptoms on the recommended mode of delivery are plotted as percentages. The associations between the characteristics of the gynecologists and the recommendation given were analyzed by univariate and multivariate logistic regression analysis. This study was approved by the medical ethics committee of the University Hospital Maastricht, Maastricht University.
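For readers who want to reproduce this type of univariate and multivariate logistic regression outside SPSS, the sketch below shows one way to do it in Python. The variable names, coding and simulated data are assumptions made for illustration, not the study's actual dataset.

```python
# Illustrative sketch of univariate and multivariate logistic regression with
# statsmodels; all columns and data below are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 400
df = pd.DataFrame({
    "years_exp": rng.choice(["<=15", ">15"], n),
    "hospital":  rng.choice(["university", "teaching", "nonteaching"], n),
    "grade":     rng.choice(["3A", "3B", "3C", "4"], n),
})
# hypothetical outcome: probability of recommending a vaginal delivery
logit_p = (-0.2 + 0.5 * (df["years_exp"] == ">15")
           + 0.4 * (df["hospital"] == "nonteaching")
           - 0.6 * (df["grade"] == "4"))
df["rec_vaginal"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

# univariate model: one predictor at a time
uni = smf.logit("rec_vaginal ~ C(years_exp)", data=df).fit(disp=False)

# multivariate model: adjust simultaneously for experience, hospital type and grade
multi = smf.logit("rec_vaginal ~ C(years_exp) + C(hospital) + C(grade)",
                  data=df).fit(disp=False)
print(np.exp(multi.params))  # odds ratios for each characteristic
```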
Results
Of the 973 gynecologists, 234 responded (24%). The characteristics of the respondents are shown in Table 1. Of the respondents, 59% were female and 36% had more than 15 years of experience as a gynecologist. Of all the respondents, 93% had a subspecialty, and of these, 36% were perinatologists and 31% were urogynecologists.
The recommendations on mode of delivery in a subsequent pregnancy are shown for the 16 case descriptions in Table 2. In women without symptoms after a grade 3A perineal tear, almost 91% of gynecologists would recommend a vaginal delivery in a subsequent pregnancy. But in women with a history of a grade 4 perineal tear with persisting symptoms of fecal incontinence, almost 84% of gynecologists would recommend a cesarean section. An increasing proportion of gynecologists would recommend a cesarean section with increasing complaints of anal incontinence and increasing degree of OASI. The percentage of gynecologists who would recommend a vaginal delivery decreased from 74.3% to 10.5% with more extensive symptoms (Fig. 1). The percentage of gynecologists who would recommend a vaginal delivery decreased from 49% to 25% with increasing extent of OASI (Fig. 2). Depending on the case description, 7-20% of gynecologists would base their recommendation on endoanal ultrasonography findings. These percentages did not decrease or increase with increasing extent of OASI or with more extensive symptoms. So for gynecologists who based their recommendation on endoanal ultrasonography findings, the degree of OASI and severity of (previous) symptoms of anal incontinence had less effect on their advice.
The characteristics of the gynecologists appeared to have an effect on their recommendation on mode of delivery. After accounting for grade of perineal tear and symptoms, the multivariate analysis showed that more years of experience, working in a nonteaching hospital and perinatologist as subspecialty were characteristics independently related to the recommendation for a vaginal delivery after an OASI (Table 3).
Discussion
The recommendation on mode of delivery in a subsequent pregnancy varied considerably among gynecologists in The Netherlands and depended on (previous) symptoms and the degree of OASI. More cesarean sections were recommended in women with higher degrees of OASI and with persistent symptoms. However, both Dutch and UK guidelines do not recommend an approach that depends on the degree of OASI. There is no evidence that higher grades of OASI are associated with a higher risk of recurrence or a higher risk of anal incontinence after a subsequent delivery. In our study Dutch gynecologists considered the degree of OASI after a previous delivery as an important factor in the decision on mode of delivery in a subsequent pregnancy.
Gynecologists who based their recommendation on ultrasonography outcomes considered the degree of perineal tear less important than did gynecologists who did not use ultrasonography. We speculate that this is because these gynecologists rely more on the outcome of the endoanal ultrasonography as a predictive factor than on the previously described degree of OASI. In an endoanal ultrasonography-based prospective study, Oude Lohuis and Everhardt found that the number of defecatory symptoms had a positive correlation with persistent injury [10]. Especially in women with higher degrees of OASI, a cesarean section was often recommended without ultrasonographic information available. In our study, which provided only limited information about the clinical scenarios, information from endoanal ultrasonography in women after OASI may more often lead to a trial of labor. Transient symptoms and persistent symptoms of anal incontinence are regarded as relevant by all gynecologists when deciding on the mode of delivery in future pregnancies after OASI. Most gynecologists consider symptoms a sign of compromised sphincter function, leading to a recommendation in favor of cesarean section. There was a considerable difference in the recommendations for women with persistent flatal incontinence and women with transient symptoms of anal incontinence, although both conditions may be considered as (transient) compromised function of the sphincter. In women with transient symptoms of anal incontinence more gynecologists would recommend a vaginal delivery than a primary cesarean section, and in women with persistent flatal incontinence more gynecologists would recommend a primary cesarean section than a vaginal delivery. The appreciation of the clinical relevance of these conditions seems to differ and is thus not well defined.
A weakness of this study was the low response rate of 24%. We did send the questionnaire out to a large group of gynecologists. This low response rate may partly be explained by the general reminder that was sent out. The gynecologists approached did not receive a personal reminder, but only a mass-distributed reminder to all gynecologists. They may have felt less personally obliged to participate in the study because of the mass-distributed reminder. In terms of age, geographical distribution and the distribution of subspecialists, our respondents can be compared with Dutch gynecologists overall: the male-to-female ratio in The Netherlands is 43% to 57% [11], compared with 41% to 59% in our study. In The Netherlands 24% of gynecologists work in a university hospital, 44% in a teaching hospital and 32% in a nonteaching general hospital [11]. In this study 15% of respondents worked in a university hospital, 50% in a teaching hospital and 35% in a nonteaching hospital. Of all participating gynecologists, 93% stated that they had a subspecialty, and of these 36% were perinatologists and 31% were urogynecologists. These percentages do not reflect the distribution of all Dutch gynecologists. There was some overrepresentation of urogynecologists, who were probably more interested in filling out a questionnaire on this topic. This may have led to a higher percentage of recommended primary cesarean sections.
Furthermore, in this study we did not record real-life recommendations on mode of delivery, but described different cases in a questionnaire. The responses may differ from real-life situations where patient preferences are also considered and the eventual decision is a result of shared decision making. However, in the process of shared decision making the opinion of the gynecologist plays an important role. This study explores this opinion without taking patient preferences into account. Because we did not record real-life recommendations on the use of endoanal ultrasonography, it is possible that some respondents answered the questions on endoanal ultrasonography theoretically without ever using endoanal ultrasonography data clinically. These answers may also differ from real-life situations. In The Netherlands endoanal ultrasonographic skills and equipment are not readily available in most obstetric practices. In most larger hospitals in The Netherlands it is possible to refer a patient for endoanal ultrasonography and manometry. It is possible that some participating gynecologists who have no access to endoanal ultrasonography responded more negatively, reflecting the lack of availability rather than the theoretically desired information given by endoanal ultrasonography.
In conclusion, the recommendations of Dutch gynecologists on the mode of delivery in a subsequent pregnancy varied considerably and depended not just on (transient) symptoms but also on the degree of OASI. Furthermore, the recommendation for a vaginal delivery was found to be independently associated with more years of experience, working in a nonteaching hospital and perinatology as subspecialty. Among gynecologists who based their recommendation on ultrasonography findings, the degree of OASI and severity of (previous) symptoms were less important, and this group of respondents recommended primary cesarean sections less often. However, more robust evidence is required to identify the additional value of endoanal ultrasonography with regard to better outcomes.
|
v3-fos-license
|
2023-09-16T06:58:37.185Z
|
2023-03-01T00:00:00.000
|
261885599
|
{
"extfieldsofstudy": [],
"oa_license": "CCBY",
"oa_status": null,
"oa_url": null,
"pdf_hash": "10a84ae59c87e619dfb3933353f20cd6a269f4da",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46540",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "10a84ae59c87e619dfb3933353f20cd6a269f4da",
"year": 2023
}
|
pes2o/s2orc
|
Do nurses suffer from insomnia during the Covid-19 pandemic? a cross-sectional study led in Morocco
Introduction Nurses are one of the pillars of the health system; their constant presence with patients requires a sequence of shifts and nights in the hospital. This aspect has been accentuated during the new pandemic and undoubtedly impacts their sleep. Objectives We propose to study in this paper the effect of on-call duty on the quality of sleep of nurses. Methods We used a questionnaire made of two parts: the first part explored the sociodemographic status of our nurses, and the second part was the French version of the ISI (Insomnia Severity Index), exploring insomnia, satisfaction with sleep and daily functioning. Results Regarding descriptive statistics, from our 90 responses, the mean age was 30.9 ± 6.63 years; women and men were equally represented; 5% had a depressive disorder and 2% an anxiety disorder. In this study, 68.9% had insomnia and 2.5% of them had severe insomnia. Conclusions Indeed, insomnia, satisfaction with sleep and day-to-day functioning among nurses were altered due to the recent pandemic. Disclosure of Interest None Declared
Introduction: Covid-19 is believed to be one of the most impactful events of the 21st century. Pressure related to this pandemic was put on every part of the health system, especially residents. Medical residents, whose hierarchical position is particular, are subjected within the framework of their training to an increased level of stress due to the constant pressure of training and the current challenge of being on the front line of the pandemic. Objectives: The aim of our study is to evaluate the presence of stress in medical residents. Methods: We used a self-evaluation questionnaire with two parts, the first exploring age, sex, and history of medical, surgical and psychiatric disorders, the second exploring stress with the French version of the PSS-10 (Perceived Stress Scale). Results: Concerning our descriptive statistics: among our 140 residents, the percentages of male and female residents were almost equal; 2.85% of them already had a record of follow-up for an anxiety disorder; 71.4% had a moderate stress level and 8.6% had a high stress level. Conclusions: Our study led us to the following conclusion: stress is a component that affects the quality of the work performed by the vast majority of health care workers.
EPV0333
Do nurses suffer from insomnia during the Covid-19 pandemic? A cross-sectional study led in Morocco. Introduction: Nurses are one of the pillars of the health system; their constant presence with patients requires a sequence of shifts and nights in the hospital. This aspect has been accentuated during the new pandemic and undoubtedly impacts their sleep. Objectives: We propose to study in this paper the effect of on-call duty on the quality of sleep of nurses. Methods: We used a questionnaire made of two parts: the first part explored the sociodemographic status of our nurses, and the second part was the French version of the ISI (Insomnia Severity Index), exploring insomnia, satisfaction with sleep and daily functioning. Results: Regarding descriptive statistics, from our 90 responses, the mean age was 30.9 ± 6.63 years; women and men were equally represented; 5% had a depressive disorder and 2% an anxiety disorder. In this study, 68.9% had insomnia and 2.5% of them had severe insomnia. Conclusions: Indeed, insomnia, satisfaction with sleep and day-to-day functioning among nurses were altered due to the recent pandemic.
EPV0334
Vaccination against SARSCoV-19 among psychiatric patients at the central Greek hospital. Introduction: Vaccination against SARSCoV-19 all over Europe reached over 80% of the adult population, confronting the pandemic burden on National Health Systems. On the contrary, large parts of the population remained unvaccinated. These groups are mainly individuals with poor socioeconomic status and psychiatric patients. Objectives: To determine the ratio of fully vaccinated patients among the hospitalized and outpatients of the Psychiatric Hospital of Attika, and the reason for vaccination avoidance recorded by the clinician. Methods: The study was done retrospectively and included 2583 psychiatric patients who were hospitalized or visiting the outpatient clinic. A concise questionnaire was formed to record the main reason for avoidance (denial/medical issues/loss of follow-up/other). Results: 520 out of 2583 (21%) remained not fully vaccinated throughout the pandemic, and denial by the patient was the main reason (55%). The reasons recorded in the patient's file by the physician are shown in Table 1.
Conclusions: Psychiatric patients belong to a high-probability group for vaccine avoidance. In our study, the frequency of vaccine avoidance was considerable (21%).
Table 1. Main reasons of vaccine avoidance.
|
v3-fos-license
|
2023-09-24T16:12:50.047Z
|
2023-09-18T00:00:00.000
|
262204433
|
{
"extfieldsofstudy": [],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.mdpi.com/2073-4433/14/9/1453/pdf?version=1695112330",
"pdf_hash": "633736377a55fdbfbeaed8e60625c460e0693c66",
"pdf_src": "ScienceParsePlus",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46541",
"s2fieldsofstudy": [
"Environmental Science"
],
"sha1": "5e5280be660fc2bd143289d356c1f4fe1033a839",
"year": 2023
}
|
pes2o/s2orc
|
Spatial and Temporal Evolution Characteristics of Water Conservation in the Three-Rivers Headwater Region and the Driving Factors over the Past 30 Years
The Three-Rivers Headwater Region (TRHR), located in the hinterland of the Tibetan Plateau, serves as the "Water Tower of China", providing vital water conservation (WC) services. Understanding the variations in WC is crucial for locally tailored efforts to adapt to climate change. This study improves the Integrated Valuation of Ecosystem Services and Trade-offs (InVEST) water yield model by integrating long-term time series of vegetation data, emphasizing the role of inter-annual vegetation variation. This study also analyzes the influences of various factors on WC variations. The results show a significant increase in WC from 1991 to 2020 (1.4 mm/yr, p < 0.05), with 78.17% of the TRHR showing improvement. Precipitation is the primary factor driving the interannual variations in WC. Moreover, distinct interactions play dominant roles in WC across different eco-geographical regions. In the north-central and western areas, the interaction between annual precipitation and potential evapotranspiration has the highest influence. Conversely, the interaction between annual precipitation and vegetation has the greatest impact in the eastern and central-southern areas. This study provides valuable insights into the complex interactions between the land and atmosphere of the TRHR, which are crucial for enhancing the stability of the ecosystem.
Introduction
The Three-Rivers Headwater Region (TRHR), located in the hinterland of the Tibetan Plateau, plays a key role in ensuring ecosystem security [1]. Renowned as the "Water Tower of China", the TRHR is the birthplace of the Yellow River, Yangtze River, and Lancang River and is essential for water storage and conservation [2]. Water conservation (WC) is a crucial ecosystem regulation service, encompassing the interception and maintenance of precipitation by various components of the ecosystem, including the vegetation canopy, litter, soil, lakes, water reservoirs, and more [3]. The WC services of the TRHR satisfy the water needs of the local ecosystem and provide water resources for external and downstream areas [4,5]. However, the TRHR experiences an uneven and relatively low precipitation distribution. Rising temperatures also contribute to the degradation of frozen soil and increased evapotranspiration, which have a negative impact on WC [6,7]. Consequently, accurately assessing the WC in the TRHR and determining its driving factors is crucial to understanding the ecosystem's ability to maintain the stability and sustainability of water resources.
Previous studies have revealed an increasing trend in WC within the TRHR since the 1990s [8][9][10]. The increase in WC results from the complex interplay between ecosystem and hydrological processes, influenced by various factors such as ecosystem type, surface characteristics, climate conditions, and human activities [11]. However, differences exist in understanding the attribution of WC variations in this region. Studies suggest the rise in WC can be attributed to factors such as precipitation [3], potential evapotranspiration [12], the combined effect of increased precipitation and vegetation growth [13,14], or the combined effect of increased precipitation and decreased potential evapotranspiration [10]. Climate, vegetation, and other factors, such as slope and altitude, also play a significant role in shaping the spatial distribution of WC [15]. However, comprehensive investigations into the spatial responses of WC services in the TRHR under the synergistic influence of environmental elements are lacking. Assessing the changes in WC services and understanding the driving factors behind temporal and spatial variations are crucial. Moreover, further exploration is necessary for quantitatively examining the interplay among distinct influencing factors [16].
Currently, the main methods used to evaluate WC include the water storage capacity [5] and the water balance methods [17]. The water storage capacity method is primarily used to assess the scale of water storage capacity in sample plots, considering the contributions of vegetation, dead branches, and soil in intercepting precipitation [18]. By contrast, the assessment of large-scale WC is typically accomplished through remote sensing inversion and ecological model simulation based on water balance [19]. Remote sensing products provide information on WC at larger time and spatial scales but may not accurately capture local parameters such as soil water storage capacity [20]. The Integrated Valuation of Ecosystem Services and Trade-offs (InVEST) model, based on the water balance, is widely used for long-term WC simulation due to its relatively straightforward data preparation and its ability to capture the physical mechanisms of soil-vegetation-atmosphere interactions in terrestrial ecosystems [21]. Previous research has highlighted that neglecting the influence of interannual changes in vegetation may result in inaccurate simulations of hydrological models [22,23]. However, most studies that employ the InVEST model to assess WC have overlooked the role of interannual variation in vegetation, potentially leading to inaccuracies in estimating actual evapotranspiration [24]. Therefore, adequately considering vegetation dynamics is crucial to accurately representing the WC response.
This study aims to simulate and analyze the temporal and spatial characteristics of WC in the TRHR over the past 30 years. Correlation analysis and geographical detectors are used to investigate the combined impact of climate, vegetation, and other driving factors on WC variations. A key aspect of this study is improving the InVEST water yield model by integrating long-term vegetation dynamics and ground-based meteorological observations, thus highlighting the impact of interannual vegetation variation. Specifically, considering its importance as a parameter controlling land-atmosphere interactions and water cycling, a regional correction of actual evapotranspiration on alpine grasslands was conducted [25]. The differences in the interaction between influencing factors on WC in different eco-geographical regions were also emphasized. Overall, this research deepens the understanding of land-atmosphere interactions in the TRHR, provides valuable insights for maintaining ecosystem stability in alpine areas and offers scientific support for ecosystem management and decision-making processes.
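As an illustration of the geographical detector approach mentioned above, the sketch below computes the factor detector's q statistic, q = 1 − Σ_h(N_h σ_h²)/(N σ²), which quantifies how much a stratified (categorical) factor explains the spatial variance of WC. The water conservation values and precipitation classes used here are hypothetical.

```python
# Minimal sketch of the geographical detector's factor-detector q statistic.
# All input values below are hypothetical, not results from the study.
import numpy as np

def factor_detector_q(values, strata):
    """q in [0, 1]; larger q means stronger explanatory power of the factor."""
    values = np.asarray(values, dtype=float)
    strata = np.asarray(strata)
    n, total_var = values.size, values.var()
    within = sum(values[strata == s].size * values[strata == s].var()
                 for s in np.unique(strata))
    return 1.0 - within / (n * total_var)

wc = np.array([120.0, 135.0, 90.0, 80.0, 200.0, 210.0, 60.0, 70.0])  # WC (mm), hypothetical
precip_class = np.array([2, 2, 1, 1, 3, 3, 1, 1])                    # binned precipitation
print("q(precipitation) = %.2f" % factor_detector_q(wc, precip_class))
```

The interaction detector used in such analyses compares the q value obtained for the intersection of two factors with the q values of each factor alone, which is how the dominant pairwise interactions reported in this study can be identified.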
Research Area
The TRHR (31°39′-36°12′ N, 89°45′-102°23′ E) is located in the northwestern part of Qinghai Province and is renowned as the birthplace of the Yellow River, Yangtze River and Lancang River (Figure 1a). This region has an average elevation of over 4400 m (Figure 1a) and is characterized by predominantly low temperatures. A west-to-east elevation trend is exhibited in this region, with altitudes ranging from 2300 to 6600 m. Slopes in the area range from 0 to 26 degrees, with steeper slopes (>10 degrees) concentrated in the southeastern and central-southern parts of the region (Figure 1e). The annual average temperature fluctuates between -8 °C and 17 °C, leading to widespread permafrost in the TRHR. Precipitation in the area is typically concentrated between June and September, with a gradient of decreasing rainfall from southeast to northwest, ranging from over 840 mm to less than 160 mm (Figure 1b). The region also experiences high potential evapotranspiration, with a multi-year average of 600 mm and a range between 400 mm and 1000 mm (Figure 1c). The ecosystem in the TRHR is relatively simple in structure and fragile in terms of its environmental conditions, making it highly vulnerable to the direct impacts of climate change. These unique climatic characteristics in the region have resulted in the formation of distinct eco-geographical regions (Figure 1f, where I and II represent subcold and temperate, and B, C, and D represent semi-humid, semi-arid, and arid zones, respectively) [26]. The dominant ecosystem type in the TRHR is alpine grassland, which covers 71.18% of the total area (Figure 1a). Thus, the region has extensive coverage of alpine shrub meadows, alpine grasslands, alpine meadow steppes, and alpine meadows (Figure 1f), collectively forming the alpine grassland. Additionally, the land use distribution in the TRHR includes farmland, woodland, water bodies, built-up land, and unused land, accounting for 0.66%, 4.26%, 5.63%, 0.1%, and 18.17% of the total land area in 2020, respectively. Unused land is located primarily in the western part of the study area, while forest land is in the central-southern and eastern regions. The leaf area index (LAI) in the TRHR exhibits a spatial gradient decreasing from southeast to northwest. The annual average LAI value ranges from 0 to 2 m² m⁻². The eastern region exhibits an average LAI value exceeding 1 m² m⁻², while the western region has an average LAI value below 0.2 m² m⁻² (Figure 1d).
Data Source and Preprocessing
Monthly observational data were gathered from 20 weather stations located in the TRHR. These data cover the period from 1991 to 2020 and include parameters such as temperature, precipitation, wind speed, and relative humidity. A thin plate spline function was employed to spatially interpolate these site-based observational data, utilizing elevation as a covariate, to obtain regional-scale data across the entire region. Potential evapotranspiration data were derived from the 1 km monthly potential evapotranspiration dataset in China provided by the National Tibetan Plateau Data Center [27]. Previous studies have demonstrated the effectiveness of this dataset in simulating the WC in the Tibetan Plateau [16].
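The interpolation step described above can be roughly approximated, for illustration, with a thin-plate-spline interpolator over longitude, latitude, and elevation; this is only a simplified stand-in for the original elevation-as-covariate thin plate spline procedure, and all station values in the sketch are synthetic.

```python
# Rough approximation of station-to-grid interpolation with elevation included
# as an additional predictor. Station coordinates and values are synthetic.
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(1)
stations = np.column_stack([
    rng.uniform(89.75, 102.4, 20),   # longitude
    rng.uniform(31.6, 36.2, 20),     # latitude
    rng.uniform(2300, 5500, 20),     # elevation (m)
])
# hypothetical annual precipitation decreasing with elevation, plus noise
precip = 900 - 0.12 * stations[:, 2] + rng.normal(0, 30, 20)

tps = RBFInterpolator(stations, precip, kernel="thin_plate_spline", smoothing=1.0)

# evaluate on target grid cells described by (lon, lat, DEM elevation)
grid = np.array([[95.0, 34.0, 4200.0],
                 [98.5, 33.0, 4600.0]])
print(tps(grid))   # interpolated precipitation (mm)
```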
The vegetation dataset used in the current study included the LAI and the homogenized vegetation index (NDVI). The LAI datasets were utilized to calculate the annual vegetation evapotranspiration coefficient (Kc) of the region. In contrast, the NDVI datasets were employed to assess the explanatory power of vegetation factors regarding the spatial distribution of WC. The Global Mapping (GLOBMAP) LAI version 3 dataset was acquired to obtain the required LAI data [28]. This dataset provided the necessary LAI data with a temporal resolution of 16 days before 2001 and 8 days after. The 1 km spatial resolution NDVI data were downloaded from Zenodo (https://zenodo.org/record/6295928, accessed on 1 September 2022). Monthly vegetation index data from 1991 to 2020 were generated using the maximum synthesis method.
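As a small illustration of the maximum synthesis (maximum value composite) step, the following sketch takes the per-pixel maximum of the NDVI scenes falling within a month; the array shapes are hypothetical.

```python
# Maximum value composite (MVC): per-pixel maximum of the scenes in one month.
import numpy as np

# e.g., four 16-day NDVI scenes falling in one month, shape (scenes, rows, cols)
ndvi_scenes = np.random.default_rng(2).uniform(-0.1, 0.9, size=(4, 100, 100))

monthly_ndvi = np.nanmax(ndvi_scenes, axis=0)   # per-pixel monthly maximum
print(monthly_ndvi.shape)                        # (100, 100)
```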
Spatial data for the velocity coefficient were obtained based on empirical values associated with different land cover types to calculate water yield using the InVEST model [29].Root depths for different vegetation types were determined by referring to relevant data from the Food and Agriculture Organization (FAO).Woodland areas were assigned a root depth of 3 m, grassland areas were assigned a root depth of 1 m, and non-vegetated areas were not assigned a root depth.
The annual total water yields in the Qinghai Water Resources Bulletin were obtained to assess the accuracy of the model simulation. All the data used in this study are listed in Table 1. The WC for each grid was calculated using the water yield obtained from the InVEST model. This calculation considers various factors, including soil permeability, topography, and flow velocity coefficients, which were specific to different land-use types [29]:

WC_ij = Min(1, 249/Velocity_j) × Min(1, 0.9 × TI_i/3) × Min(1, K_sat,i/300) × Y_ij (1)

where WC_ij (mm) is the WC for each land use type (j) in each grid (i). The influence of different land use types on surface runoff was considered by using the flow velocity coefficient (Velocity_j). The topographic index (TI_i), expressed as a dimensionless parameter, was used to assess the topographic characteristics of each grid (i). The Cosby model [30] was utilized to calculate the soil saturated hydraulic conductivity (K_sat,i, cm/d). Additionally, the water yield (Y_ij, mm) was determined using the InVEST model.
TI_i = lg(D_area / (Soil_depth × P_slope)) (2)

where D_area is the number of grids in the catchment area (dimensionless), Soil_depth is the soil depth (mm), and P_slope is the percentage slope (%) calculated using the suite of Hydrology Tools (Arc Hydro) in ArcGIS 10.5.
K_sat = 60.96 × 10^(−0.6 + 0.0126 × sand − 0.0064 × clay) (3)

where sand and clay represent the content of sand and clay in the soil (%), respectively. Water yield in InVEST is defined as the amount of water that runs off the landscape. It calculates water yield by applying the water balance principle at the sub-watershed level. The conceptual diagram of the water balance model is illustrated in Figure 2. The water yield model is based on the Budyko curve and annual average precipitation. The annual water yield Y(x) for each pixel was calculated as follows:

Y(x) = (1 − AET(x)/P(x)) × P(x) (4)

where AET(x) is the annual actual evapotranspiration of grid x and P(x) is the annual precipitation of grid x.
In the water balance formula, the vegetation evapotranspiration is calculated based on the Budyko hypothesis proposed by Zhang et al. [32]:

AET(x)/P(x) = 1 + (K_c(l_x) × ET_0(x))/P(x) − [1 + ((K_c(l_x) × ET_0(x))/P(x))^ω(x)]^(1/ω(x)) (5)

where ET_0(x) is the potential evapotranspiration of grid x and K_c(l_x) represents the evapotranspiration coefficient of vegetation for land use type j and grid x. An estimation of Kc values for farmland and woodland was obtained using the relationship Kc = LAI/3 (Kc equal to 1 for LAI > 3). The Kc values for water bodies, built-up land, and unused land were set to 1.1, 0.3, and 0.5, respectively [31]. Kc for grassland was estimated using the method in Section 2.3.2. ω(x) is an empirical parameter proposed by Donohue et al. [33]:

ω(x) = Z × AWC(x)/P(x) + 1.25 (6)

where Z is the Zhang coefficient, which corresponds to the number of precipitation occurrences every year. When Z = 3.5, the error between the simulated water yield and actual runoff is the smallest (7.9%) [34]. AWC(x) represents the available water content of soil (mm), which is used to determine the total water stored and provided by soil for plant growth. AWC(x) is calculated as follows:

AWC(x) = Min(root restricting layer depth, root depth) × PAWC (7)

where AWC(x) is estimated as the product of the plant available water capacity (PAWC) and the minimum of root-restricting layer depth and vegetation rooting depth. PAWC is calculated from soil texture and organic matter content:

PAWC = 54.509 − 0.132 sand% − 0.003 (sand%)² − 0.055 silt% − 0.006 (silt%)² − 0.738 clay% + 0.007 (clay%)² − 2.688 OM% + 0.501 (OM%)² (8)

Actual evapotranspiration (AET) for other land use types (water bodies, built-up land, unused land) is calculated directly from ET_0(x):

AET(x) = Min(K_c(l_x) × ET_0(x), P(x)) (9)

where ET_0 represents potential evapotranspiration and K_c(l_x) represents the evapotranspiration coefficient for the land use type without vegetation cover. The value of K_c(l_x) is taken as a fixed value for each type [35]: 1.1 for water bodies, 0.3 for built-up land, and 0.5 for unused land.
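As an illustration of the per-grid water yield and water conservation calculation described above, a minimal sketch is given below. The constants in the retention terms (249, 0.9/3, 300) and the ω parameterisation follow the commonly used InVEST-based WC formulation and are assumptions to be checked against [29,31,33]; all array names are hypothetical, and the sketch is not the authors' exact implementation.

```python
import numpy as np

def invest_water_conservation(P, ET0, Kc, AWC, velocity, TI, Ksat, Z=3.5):
    """Annual water yield and water conservation for vegetated grid cells.

    Inputs are arrays over the grid:
      P    annual precipitation (mm)       ET0  potential evapotranspiration (mm)
      Kc   vegetation evapotranspiration coefficient (-)
      AWC  plant-available water content (mm)
      velocity  flow velocity coefficient of the land-use type (-)
      TI   topographic index (-)           Ksat saturated conductivity (cm/d)
      Z    Zhang seasonality coefficient (3.5 in this study)
    """
    omega = Z * AWC / P + 1.25                     # empirical omega parameter
    dryness = Kc * ET0 / P                         # vegetation-adjusted PET/P
    aet_ratio = 1.0 + dryness - (1.0 + dryness ** omega) ** (1.0 / omega)
    AET = np.clip(aet_ratio, 0.0, 1.0) * P         # AET cannot exceed rainfall
    Y = P - AET                                    # annual water yield (mm)
    WC = (np.minimum(1.0, 249.0 / velocity)        # retention scaling of yield
          * np.minimum(1.0, 0.9 * TI / 3.0)
          * np.minimum(1.0, Ksat / 300.0)
          * Y)
    return Y, WC
```

In practice the velocity, TI and Ksat layers are prepared once from the land-use, terrain and soil maps, while P, ET0, Kc and AWC vary by year.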
Calibration Method of Evapotranspiration Coefficient in Grassland Based on Interannual Variation in Vegetation
Alpine grassland covers more than 70% of the total area of the TRHR, making it the dominant terrestrial ecosystem type. Therefore, the main objective of this study is to calibrate the Kc coefficient specifically for alpine grasslands. Accurately capturing the dynamic evapotranspiration patterns of alpine grasslands within the InVEST model is crucial. The growth process of alpine grassland can be categorized into different stages, which exhibit interannual variability influenced by climatic changes.
Monthly grid data of LAI were employed to estimate the Kc coefficient for each grid.Subsequently, the monthly average Kc coefficient was calculated for the entire study area by averaging the grid-based Kc coefficients.Finally, the 12 monthly regional average Kc coefficients for each year were combined to generate the annual value of Kc coefficients for grassland, following the approach recommended by the InVEST model.The specific procedure is as follows.
First, the monthly average Kc coefficients of grassland were computed based on the different growth stages of vegetation. Previous research and actual meteorological conditions identify three distinct stages for the growth season of alpine grasslands: the initial growth season (May), the middle growth season (June, July, and August), and the late growth season (September and October) [36]. Following the FAO-56 guidelines, the recommended values for Kc during the initial, middle, late, and non-growing stages of grassland are 0.4, 1.05, 0.85, and 0.4, respectively [35]. However, these coefficients can be adjusted to reflect the actual climatic conditions of the study area. The Kc coefficients for the middle and late growth stages were modified, while the Kc coefficients for the early growth and non-growth stages remained unadjusted. The adjustment equation employed in this study is as follows:

K_c = K_c,rec + [0.04 × (u_2 − 2) − 0.004 × (RH_min − 45)] × (h/3)^0.3 (10)

where RH_min is the mean value of the daily minimum relative humidity (%), u_2 is the wind speed at 2 m height (m/s), and K_c,rec is the recommended Kc value in FAO-56. h is the mean plant height (m), fitted from wind speed, relative humidity, and LAI following the empirical relationship of Zhao et al. [37]. Second, the annual vegetation evapotranspiration coefficient (K_c,annual) was determined using the calculation formula recommended by the InVEST model. Each month's regional average Kc coefficient was calculated by averaging the grid-based monthly Kc coefficients across the entire study area. Then, each month's regional average potential evapotranspiration was computed. Finally, the annual Kc coefficient was derived by using the following formula:

K_c,annual = Σ_{m=1}^{12} (K_c,m × ET_0,m) / Σ_{m=1}^{12} ET_0,m (11)

where K_c,m represents the average Kc coefficient of month m (1-12) and ET_0,m is the corresponding ETo.
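A compact sketch of this two-step calibration, assuming the monthly regional means have already been computed, is shown below. The adjustment follows the FAO-56 style formula quoted above; the plant-height relation of Zhao et al. [37] is not reproduced, so h is treated as an input, and all names are hypothetical.

```python
import numpy as np

def adjust_kc(kc_rec, u2, rh_min, h):
    """FAO-56 style climatic adjustment of a tabulated mid/late-season Kc.

    kc_rec : recommended Kc (e.g., 1.05 mid-season, 0.85 late season)
    u2     : mean wind speed at 2 m (m/s)
    rh_min : mean daily minimum relative humidity (%)
    h      : mean plant height (m), here taken as a given input
    """
    return kc_rec + (0.04 * (u2 - 2.0) - 0.004 * (rh_min - 45.0)) * (h / 3.0) ** 0.3

def annual_kc(kc_monthly, et0_monthly):
    """ET0-weighted annual Kc as recommended by the InVEST user guide."""
    kc_monthly = np.asarray(kc_monthly, dtype=float)    # 12 regional mean Kc
    et0_monthly = np.asarray(et0_monthly, dtype=float)  # matching monthly ET0 (mm)
    return float(np.sum(kc_monthly * et0_monthly) / np.sum(et0_monthly))
```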
The differences between the observed yearly total water yield (TWY) and the simulated TWY were evaluated to investigate the accuracy of the simulations by the improved InVEST model using the regionally calibrated Kc coefficient, including the coefficient of determination (R 2 ) and the root mean square error (RMSE).The differences between the observed and simulated TWY obtained through three different methods of calculating Kc are further compared to illustrate the effectiveness of the improved model (Figure 3).The approach proposed in Section 2.3.2 was utilized in the first method to estimate the Kc coefficients (Figure 3a).In the second method, the Kc coefficient was determined using the relationship Kc = LAI/3 (where Kc equals 1 when LAI > 3) (Figure 3b).This method is mentioned in the InVEST model manual.The calculation process involves synthesizing the monthly values of LAI using the maximum value synthesis method, followed by averaging the values for each month across each year to derive the annual average LAI.The Kc value for each grid is computed based on this relationship Kc = LAI/3, and the overall Kc value required by the model operation is obtained by averaging across the entire region.The third method uses a fixed Kc coefficient of 0.65, corresponding to the recommended Kc value for grassland in the InVEST model (Figure 3c).The observed TWY data are obtained from the Qinghai Water Resources Bulletin, covering 1994 to 2020.
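A small helper for this comparison might look as follows; note that some studies instead report R² as the squared Pearson correlation of a linear fit, which can differ slightly from the 1 − SSres/SStot definition used here, and the array names are illustrative.

```python
import numpy as np

def r2_rmse(observed, simulated):
    """Coefficient of determination and RMSE for the annual TWY series."""
    obs = np.asarray(observed, dtype=float)
    sim = np.asarray(simulated, dtype=float)
    ss_res = np.sum((obs - sim) ** 2)
    ss_tot = np.sum((obs - obs.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot
    rmse = np.sqrt(np.mean((obs - sim) ** 2))
    return r2, rmse
```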
Comparatively, the simulation results of TWY based on the improved model display better performance, with an increase in the R 2 ranging from 0.51% to 14.6% and a reduction in RMSE ranging from 0.478 × 10 10 m 3 to 2.193 × 10 10 m 3 .The improved model enhances the performance of the InVEST water yield model by incorporating information on interannual vegetation changes.
Geographical Detector
The geographical detector method is a powerful tool for analyzing the driving forces by detecting the heterogeneity of the spatial stratification of elements [38].Factor detectors and interaction detectors were utilized to examine the driving factors of WC.This study focused on two important aspects: identifying the factors that significantly influence WC and investigating the interactions between these factors.
The factor detector is employed to detect the spatial differentiation of dependent variables and assess the explanatory power of independent variables for WC. The q-value quantifies the strength of this explanatory power. The specific calculation formula is as follows:

q = 1 − Σ_{h=1}^{L} N_h σ_h² / (N σ²) = 1 − SSW/SST (12)

where h is the stratification of the dependent or independent variable, with values ranging from 1 to L. N_h and N are the number of cells within stratum h and the whole region, respectively. σ_h² and σ² are the variances of the values of the dependent variable in the stratum and the whole region, respectively. SSW = Σ_{h=1}^{L} N_h σ_h² is the sum of the within-stratum variances, and SST = N σ² is the total variance of the whole region. q indicates the explanatory power of independent variables for dependent variables, with a value range between 0 and 1. A higher value of q indicates a stronger explanatory power of independent variables for dependent variables. The main influencing factors of WC can be determined by examining the value of q. The q value can be tested for significance within the geographic detector framework because it follows a non-central F distribution after a simple transformation. This study also assesses whether two factors interact, determining the strength and characteristics of the interaction by comparing the q values of the individual factors with the interaction value q(X1 ∩ X2) obtained from the geographic detector. If q(X1 ∩ X2) < Min(q(X1), q(X2)), the interaction is characterized by nonlinear weakening. If Min(q(X1), q(X2)) < q(X1 ∩ X2) < Max(q(X1), q(X2)), the interaction shows single-factor nonlinear weakening. If q(X1 ∩ X2) > Max(q(X1), q(X2)), the interaction displays two-factor enhancement. If q(X1 ∩ X2) = q(X1) + q(X2), the two factors are independent. Finally, if q(X1 ∩ X2) > q(X1) + q(X2), the interaction is characterized by nonlinear enhancement.
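A minimal sketch of the factor detector and the interaction classification, assuming the WC values and the discretized factor strata are one-dimensional arrays over the grid cells (hypothetical variable names), is given below.

```python
import numpy as np
import pandas as pd

def q_statistic(values, strata):
    """Factor-detector q statistic for a continuous variable over strata."""
    df = pd.DataFrame({"y": values, "h": strata}).dropna()
    n = len(df)
    sst = n * df["y"].var(ddof=0)                         # total variance term
    ssw = sum(len(g) * g["y"].var(ddof=0) for _, g in df.groupby("h"))
    return 1.0 - ssw / sst

def interaction_type(q1, q2, q12):
    """Classify the interaction of two factors from their q values."""
    if np.isclose(q12, q1 + q2):
        return "independent"
    if q12 < min(q1, q2):
        return "nonlinear weakening"
    if q12 < max(q1, q2):
        return "single-factor nonlinear weakening"
    if q12 < q1 + q2:
        return "two-factor enhancement"
    return "nonlinear enhancement"

# q(X1 ∩ X2) is obtained by stratifying WC on the cross-classification of the
# two discretized factors, e.g. with pandas Series s1 and s2:
# q12 = q_statistic(wc, s1.astype(str) + "_" + s2.astype(str))
```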
The objective of this study is to examine the factors that influence the spatial heterogeneity of WC.These factors include climatic factors such as annual average precipitation and annual average potential evapotranspiration, topographic factors such as slope and digital elevation model, and vegetation factors such as annual average growing season LAI.Continuous variables need to be converted into classified data to meet the requirements of the geographic detectors.Therefore, the natural breakpoint method in ArcGIS 10.5 was utilized to divide altitude, slope, annual average precipitation, annual average potential evapotranspiration, and LAI into nine categories.The analysis process of the geographic detector using NDVI as the vegetation factor was repeated to ensure the reliability of the results.
Trend Changes and Correlation Analysis
Simple linear regression analysis was initially employed to examine the interannual variation trends within the TRHR at regional and grid scales.Subsequently, Pearson correlation analysis was used to investigate the impact of individual environmental factors on the interannual variability in WC.
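A sketch of how these two analyses can be run on gridded annual series is given below; the stack layout and variable names are assumptions, and the per-pixel loop is written for clarity rather than speed.

```python
import numpy as np
from scipy import stats

def pixelwise_trend(annual_stack):
    """Least-squares trend (units/yr) and p-value for each grid cell.

    annual_stack : (n_years, ny, nx) array of annual WC (or climate) values.
    """
    n_years, ny, nx = annual_stack.shape
    years = np.arange(n_years)
    slope = np.full((ny, nx), np.nan)
    pval = np.full((ny, nx), np.nan)
    for i in range(ny):
        for j in range(nx):
            y = annual_stack[:, i, j]
            if np.all(np.isfinite(y)):
                res = stats.linregress(years, y)
                slope[i, j], pval[i, j] = res.slope, res.pvalue
    return slope, pval

# Regional interannual association, e.g. WC versus the precipitation series:
# r, p = stats.pearsonr(wc_regional_mean, precip_regional_mean)
```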
Total Research Approach
The specific research schemes are outlined as follows: (1) First, the monthly Kc coefficients are calculated using the monthly LAI grid data to correct for the grassland's middle and late growth stages. The method recommended in the InVEST model manual is used to synthesize the monthly Kc coefficients into an annual value. Other parameter values necessary for model execution can be found in Sections 2.3.1 and 2.3.2. (2) Next, the InVEST model is utilized to simulate the water yield of the TRHR from 1991 to 2020. The TWYs obtained from the model output are compared with the recorded TWYs in the water resources bulletin to verify the accuracy of the simulation results. Once the verification is complete, we estimate the WC based on the grid-scale water yield results. (3) The influence of vegetation and climate on the temporal changes in WC is analyzed using correlation analysis. Additionally, the geographic detector method is employed to explore the effects of climate, vegetation, and topography on the spatial differentiation of WC and the interactions among these factors (Figure 4).
Characteristics of Interannual Variation in Water Conservation
After regional correction, the multi-year average Kc coefficient for the grassland is 0.72, which is higher than the recommended value of 0.65 suggested by the InVEST model (Figure 5). Overall, the Kc coefficient showed a slight downward trend over the years, which was not very pronounced. The coefficient of variation of Kc from 1990 to 2000 was 0.018, indicating significant fluctuation of the Kc coefficient of grassland during that period. By contrast, the overall coefficient of variation from 1990 to 2020 decreased to 0.013, indicating decreased fluctuation in the 2000s and 2010s. Among the years analyzed, the evapotranspiration coefficient of grassland reached its maximum value of 0.75 in 1997 and its minimum value of 0.71 in 1993. WC in the TRHR exhibited a significant increase (p < 0.05) from 1991 to 2020, with an annual growth rate of 1.4 mm (Figure 6). A series of ecological projects have been implemented, and the ecological environment in the TRHR has improved since 2000. Accordingly, WC has changed from a weak increase to a strong increase since 2000. Specifically, WC demonstrated a nonsignificant upward trend (1.16 mm/yr) from 1991 to 1999, whereas a significant increase with a growth rate of 1.74 mm/yr (p < 0.05) was observed from 2000 to 2020. Over the past 30 years, the average annual WC in the TRHR was 77.82 mm. During the 1990s, the average annual WC in this region was 65 mm. The annual average WC has increased to 83.31 mm since 2000, representing an overall growth of 28.17%. The change trend of WC is similar to that of precipitation, with a change rate of 3.71 mm/yr before 2000 and 3.8 mm/yr from 2000 to 2020. The lowest value for WC occurred in 2015 (46.31 mm), while the highest was observed in 2019 (114.76 mm). The average annual total WC over the years was 2.79 × 10^10 m^3. The overall trend of total WC aligns with the trend of WC over time.
Spatial and Temporal Distributions of Water Conservation
The spatial distribution of WC in the TRHR shows significant heterogeneity (Figure 7a). The areas with high WC were found mainly in the southeastern part of the Yellow River source area and in the source area of the Lancang River, specifically in Banma County, Nangqian County, and Jiuzhi County. The average value of WC over the years was 77.82 mm. Among the sub-regions, the mean value of WC was highest in the HIB1 region (125.32 mm), followed by the HIIC2 (112.97 mm) and the HIC1 regions (71.97 mm). The HIB1 region is characterized by lower altitude and a more humid climate, conditions that favor WC. Moreover, this area has relatively high vegetation coverage, which plays a pivotal role in regulating runoff and increasing water yield, making it a promising region for WC. WC exhibited an increasing trend in 96.12% of the region during the study period, and among these trends, 78.17% were statistically significant (p < 0.05) (Figure 7b). These findings highlight substantial WC improvements across most of the TRHR over the past three decades. Notably, the southeastern region of the TRHR exhibited an even more pronounced trend, with rates exceeding 2 mm/yr. However, approximately 3.87% of the region still showed a decreasing trend in WC. Moreover, approximately 1.72% of the total area of the TRHR experienced a significant decrease in WC throughout the study period, with a rate of -2.77 mm/yr (p < 0.05).
Drivers of Interannual Change
The TRHR experienced an increase in average annual precipitation during the period from 1991 to 2020, at a rate of 3.6 mm/yr (p < 0.05) (Figure 8a). Before 2000, the increase in precipitation was not statistically significant, with an annual increase of only 3.71 mm/yr. The rate of precipitation increase has accelerated since 2000, with an annual increase of 3.80 mm/yr. By contrast, potential evapotranspiration demonstrated no significant change throughout the study period, with a rate of 0.48 mm/yr (Figure 8b). The rate of increase in potential evapotranspiration slowed after 2000. Additionally, the average annual LAI from 1991 to 2020 exhibited an upward trend, indicating improved vegetation growth (Figure 8c). The TRHR generally had a low baseline LAI, but a fluctuating upward trend was observed from 1991 to 2020, with a growth rate of 0.0008 m² m⁻²/yr (p < 0.05). In particular, the annual fluctuation in LAI was more pronounced during 2000-2020 than in the 1991-2000 period. Table 2 shows a significant positive correlation between precipitation and WC. The increase in precipitation in the study area over the past 30 years has played an important role in improving WC. However, no significant correlation between potential evapotranspiration and WC was observed over the study period as a whole. Before 2000, potential evapotranspiration and WC had a positive correlation, but this relationship was not statistically significant. However, this correlation changed to a significant negative correlation after 2000. An increase in potential evapotranspiration indicates that meteorological factors such as temperature, radiation, and wind speed exert a greater atmospheric demand for water, ultimately leading to an increase in actual evapotranspiration and a decrease in WC. The increase in LAI also promotes greater WC. After 2000, the influence of LAI on WC increased.
Drivers of Spatial Heterogeneity
The geographical detector method was employed to identify the primary factors influencing the spatial distribution of the annual average WC in the region, given the significant spatial variability of WC within the TRHR. Analysis of the q values of each factor indicated that annual average precipitation had the highest explanatory power for WC (0.73), followed by LAI (0.582), the digital elevation model (DEM) (0.221), annual average potential evapotranspiration (0.207), and slope (0.109). All of these factors were statistically significant (p < 0.01) (Table 3). Annual average precipitation accounted for 73% of the spatial distribution of WC and was the primary driving factor in the region. LAI demonstrated moderate explanatory power for WC, while potential evapotranspiration and altitude had weak influences. Slope had the weakest influence on the spatial heterogeneity of WC, contributing only 10.9%. A multi-year average NDVI from 1991 to 2020 was also used as the vegetation factor to check the reliability of the results, yielding findings consistent with those obtained using LAI. An interaction detector was used to analyze the interaction between pairs of factors in the TRHR and its ecological sub-regions to further explore the impact of different factors on the spatial variation in WC (Figure 9). The entire interaction detection process was also replicated using NDVI data to validate the reliability of the findings. The interaction between two factors provided a better explanation for the spatial differentiation of WC than any single factor. In particular, the interaction between precipitation and other factors played a key role in the spatial differentiation pattern of WC. The interaction between precipitation and potential evapotranspiration had the strongest influence on WC in the north-central and western regions (HIC1) (explaining at least 63.1% of the spatial distribution of WC), where the background value of LAI was relatively low. Precipitation and vegetation together accounted for 72.8% and 60% of the distribution of WC in regions HIIC2 and HIB1, respectively.
Interannual Variation in Water Conservation
Vegetation dynamics affect the regional WC through their influence on rainfall-runoff capacity [39] and evapotranspiration [40].Thus, considering the impact of vegetation dynamics is crucial in quantifying WC services.Some hydrological models, including the Soil and Water Assessment Tool (SWAT), Water and Energy transfer Processes in Large river basins (WEP-L), and Water and Energy Budget Distributed Hydrological Model (WEB-DHM), also consider the effects of vegetation growth on precipitation distribution, soil moisture, and groundwater recharge [41][42][43].This study focused specifically on the influence of vegetation on WC and improved the InVEST model to reflect interannual variation in vegetation and enhance the accuracy of the simulation.
The grassland growth and actual evapotranspiration in the Tibetan Plateau exhibit evident stage and interannual variations [44]. An empirical coefficient, the Kc coefficient, is commonly used to determine vegetation water requirements. This coefficient is typically calculated by dividing actual evapotranspiration (AET) by reference evapotranspiration (ETo). Previous studies have employed this ratio to estimate the annual Kc coefficient. However, obtaining AET data in advance is necessary for the corresponding research, which limits the generalizability of this approach [45]. The Kc coefficient for a specific region can also be determined by establishing an empirical relationship between vegetation indexes, such as LAI and NDVI, and the corresponding Kc coefficients. However, collecting the observed Kc coefficients beforehand to establish this relationship is crucial [46]. This study first categorized grassland Kc coefficients into three growth stages to better understand the variations in grassland evapotranspiration. Then, the Kc coefficients recommended by FAO-56 for each specific stage of grassland growth were utilized. The Kc coefficients for the middle and late growth stages were also refined by incorporating local climate data. These adjusted coefficients were combined to derive annual-scale grassland Kc coefficients. The methodology developed in this study enables rapid and accurate estimation of Kc coefficients across a wide range of temporal and spatial scales. However, because extensive plant height data were not available, plant height had to be estimated from LAI using an empirical formula when estimating the Kc coefficient. This approach is specific to alpine grasslands because of the regional nature of the empirical formula employed and may not apply to other regions. If simulated or observed plant height data become available for a broader range of times and locations, the method proposed in the current research could potentially be applied to grassland areas on a global scale.
Factors Affecting Temporal and Spatial Differentiation of Water Conservation
In the past 30 years, the WC in the TRHR has increased significantly, which is consistent with previous studies [8,13,47].WC is a complex process influenced by various factors.Climatic factors, including precipitation, temperature, and humidity directly impact on the hydrological cycle and the ability to conserve water [48,49].The relationship between interannual variations in WC and climatic factors was examined in the TRHR.The results indicated a significant positive correlation between precipitation and WC over the past 30 years, in line with previous research [3,13,16].Precipitation emerged as the primary factor influencing the spatial variation in WC in the TRHR, followed by vegetation (measured by NDVI or LAI) consistent with previous studies in the region [14,16].The evapotranspiration capacity of the terrestrial ecosystem, indicated by potential evapotranspiration, has a great effect on WC in the TRHR, along with vegetation.The TRHR displays significant spatial variations in WC, demonstrating high WC values found in areas with abundant precipitation and well-developed vegetation.Topography regulates WC by impacting vegetation structure, soil properties, and water and heat conditions [50].Nevertheless, slope has a weaker effect on WC than other influencing factors.
The implementation of ecological projects in the TRHR also contributes to enhancing WC.Grassland degradation has been observed in this region since the 1980s, reducing effectiveness in intercepting runoff and storing water [51].Since 2000, the government has initiated several ecological projects to address this issue, including the grassland ecological reward mechanism, the grazing withdrawal project, and the first and second stages of ecological conservation and restoration projects [52,53].These projects transformed approximately 1.98 × 10 4 km 2 of unused land into grassland in the TRHR by 2020.Furthermore, a remarkable increase in vegetation coverage has been observed in the entire region over the past 30 years, stabilizing surface temperature and enhancing soil moisture conditions [54].Implementing ecological projects has effectively improved WC in the TRHR, strengthening its ecological security barrier function [55].
A single factor alone is often insufficient to comprehend the entire ecosystem's complexity fully.The complexity of geographical processes typically involves the interplay of multiple factors.While the interaction between precipitation and vegetation enhances their explanatory power for WC, the degree of interaction may vary across eco-geographical regions.In the TRHR, HIC1 experiences relatively low precipitation and has a low background value of vegetation.Therefore, the interaction between precipitation and potential evapotranspiration effectively explains the distribution of WC in this area.By contrast, HIB1 and HIIC2 are relatively moist and have abundant vegetation.Vegetation plays a crucial role in regulating the distribution of precipitation and ensuring WC through transpiration and interception [56].Thus, the interaction between vegetation and precipitation in these areas has a high explanatory capacity for WC.
Adaptation to Future Climate Change
The Tibetan Plateau is predicted to experience increased precipitation in the future, which will positively impact WC in most areas of the TRHR, such as the Yangtze River and Lancang River sources [3,57]. However, certain areas, particularly some counties at the source of the Yellow River, face a decline in WC because of reduced precipitation and increased potential evapotranspiration resulting from a weakening summer monsoon. The increase in rainfall is also accompanied by higher rainfall erosivity, leading to soil erosion and uncertainties in WC [58]. The Tibetan Plateau is recognized as one of the most sensitive regions to global climate change, with a warming rate about twice the global average [59]. The TRHR, located in the hinterland of the Tibetan Plateau, has experienced even more pronounced warming. The increasing frequency of extreme heat events in the future will have a profound impact on the terrestrial ecosystem balance in this region [60]. Rising air temperatures lead to a decrease in soil moisture [61] and an increase in the saturation vapor pressure deficit [62], thereby increasing the vulnerability of WC services in this region. Climate change is anticipated to intensify vegetation expansion (greening) in the TRHR [63]. The development of grasslands is closely linked to WC services [64]. This large-scale vegetation greening will cause changes in evapotranspiration and precipitation patterns [65]. Permafrost in the TRHR has also been declining since 1980, while the thickness of the active layer has been increasing [51,66]. Hence, future research should monitor the impact intensity of vegetation greening and permafrost degradation on WC. Changes in WC also affect the terrestrial ecosystem in alpine regions [67]. Therefore, understanding WC services and their interactions with various environmental factors is crucial to developing effective measures to adapt to climate change.
Limitations and Future Work
WC requires a comprehensive understanding of the ecohydrological processes within a basin, including factors such as precipitation interception and evaporation.It also involves exploring the interaction between multiple hydrological processes and the ecosystem, such as runoff timing, water quality, flood protection, and temperature regulation.Runoff timing can indicate the capacity for WC.If the surface and groundwater systems have a strong water storage capacity, runoff from rainfall or snowmelt will slowly enter rivers or lakes, delaying runoff formation.In such cases, the runoff time will be relatively late [68].WC processes reduce soil erosion and prevent pollutants from entering water bodies.For example, vegetation slows down the flow of rainfall, allowing more time for water to permeate the soil, which reduces surface runoff and helps decrease the risk of soil erosion and the introduction of pollutants into water bodies [69].Additionally, a basin with a good WC capacity mitigates the likelihood of natural disasters such as floods and droughts by intercepting rainfall and storing soil water [70,71].These aspects of WC function will be considered to further enhance the understanding of WC in future research.
The research methods employed in this study have several limitations. First, the WC simulation does not differentiate Kc coefficients at the grid scale. Instead, a regional average Kc coefficient is utilized, which may introduce certain uncertainties into the results. Second, the InVEST model does not account for the influence of runoff in the calculation of regional WC, which is another limitation [31,72]. The model also fails to consider the impact of freezing and thawing on WC [14]. Therefore, future research should focus on developing more precise methods for simulating vegetation evapotranspiration coefficients at the grid scale and exploring ways to optimize WC models in plateau regions.
The environment in the TRHR is highly complex, and this study has not delved deeply into the various factors contributing to changes in WC.Human activities such as overgrazing, expansion of agricultural land, road construction projects, and ecological project initiatives alter the surface, directly or indirectly affecting the WC services of the region [73][74][75].Future studies should focus on quantitatively analyzing the positive and negative impacts of human activities on WC to gain a more comprehensive understanding.Differentiating between the effects of human activities and natural environmental factors on WC is crucial.
Conclusions
In this study, we improved the InVEST model by integrating a long-term series of LAI data to capture the interannual variation in vegetation. The temporal and spatial distribution characteristics of WC in the TRHR were assessed quantitatively from 1991 to 2020. Trend and correlation analyses were used to examine the temporal variation trend and driving factors of WC. We also employed the geographical detector method to explore the influencing factors and their interactions in the spatial distribution of WC. The main findings are as follows: (1) WC in the TRHR exhibits a spatial pattern with high values in the south and low values in the north. WC significantly increased (1.4 mm/yr, p < 0.05) from 1991 to 2020. Compared with 1991-1999, the annual average value of WC in 2000-2020 increased by 28.17%. Over the past 30 years, WC improved in 78.17% of the region (p < 0.05). (2) The increase in precipitation over the past three decades has had a significant positive impact on WC (R = 0.97, p < 0.01). However, the growth in potential evapotranspiration has had a significant inhibitory effect on WC since 2000 (R = -0.5, p < 0.05). (3) The spatial variation in WC is influenced primarily by precipitation, followed by vegetation and potential evapotranspiration. The interaction among these factors has a stronger explanatory power for WC than individual factors alone. The interaction between precipitation and other influencing factors demonstrates the greatest explanatory power. The combined influence of precipitation and vegetation accounts for approximately 79.1% of the WC distribution across the study area. However, the dominant interaction factors for WC vary among eco-geographical regions. In the north-central and western regions (HIC1) with low vegetation, the interaction between annual precipitation and potential evapotranspiration explains 65% of the variation in WC, making it the dominant interaction factor in the region. In the eastern and central-southern areas (HIB1 and HIIC2), the interaction between annual precipitation and vegetation exhibits the strongest influence.
The proposed method provides valuable insights into simulating WC. Incorporating processes such as permafrost degradation will be crucial for the accurate assessment of WC in plateau regions in future work.
Data Availability Statement:
The data presented in this study are available upon request from the corresponding author.
Figure 2 .
Figure 2. Conceptual diagram of the water balance model used in the InVEST water yield model [31].Only parameters shown in color are included, and parameters shown in grey are ignored.
Figure 3 .
Figure 3. Evaluation of simulated total water yield (TWY) versus observed TWY (TWY was simulated by the InVEST model using (a) calibration Kc values with interannual variation in vegetation, (b) Kc values obtained using the relationship Kc = LAI/3, or (c) Kc coefficients estimated at Kc = 0.65.Shadow represents a 95% confidence interval).
Figure 4 .
Figure 4. Implementation flow chart of this study (Formula a is derived from Zhao, et al.[37].Formula b is derived from Allen, et al.[35].Formula c is derived from Sharp, et al.[31]).
Figure 5 .
Figure 5. Regional corrected Kc coefficient of grassland in the Three-Rivers Headwater Region from 1990 to 2020.
Figure 6 .
Figure 6. Temporal variation in water conservation in the Three-Rivers Headwater Region from 1991 to 2020 (shown as anomalies relative to the 1991-2020 mean).
Figure 7 .
Figure 7. The spatial distribution (a) and trend (b) of water conservation in the Three-Rivers Headwater Region from 1991 to 2020 (the inset panels in the bottom right of (a) display the water conservation values in different ecosystem zones using a violin diagram. The inset panels in the bottom right of (b) indicate the significance level (p < 0.05). The percentages of increasing (I) and decreasing (D) trends (percentage of significant trends in parentheses) are shown at the bottom of (b)).
Figure 8 .
Figure 8. Interannual variability of climatic and vegetation elements from 1991 to 2020 ((a) Average annual precipitation. (b) Average annual potential evapotranspiration. (c) Annual average leaf area index).
Figure 9 .
Figure 9. Explanatory power of interactive influencing factors on water conservation in the Three-Rivers Headwater Region and its sub-regions ((a-d): LAI represents vegetation factor, (e-g): NDVI represents vegetation factor).
Author Contributions:
Conceptualization, data curation, formal analysis, methodology, writing-original draft, Y.P.; methodology, conceptualization, funding acquisition, supervision, writing-review and editing, Y.Y. All authors have read and agreed to the published version of the manuscript. Funding: This research was funded by the Second Tibetan Plateau Scientific Expedition Program (2019QZKK0403). Institutional Review Board Statement: Not applicable. Informed Consent Statement: Not applicable.
Table 2 .
Correlation between climate, vegetation, and water conservation.
Table 3 .
The q values of influencing water conservation factors in the Three-Rivers Headwater Region.
|
v3-fos-license
|
2023-02-17T14:47:42.650Z
|
2017-06-05T00:00:00.000
|
256935029
|
{
"extfieldsofstudy": [],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.nature.com/articles/s41598-017-03044-w.pdf",
"pdf_hash": "577ae1f6db80817aaca50f80fde7c488ca37ef84",
"pdf_src": "SpringerNature",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46544",
"s2fieldsofstudy": [
"Environmental Science"
],
"sha1": "577ae1f6db80817aaca50f80fde7c488ca37ef84",
"year": 2017
}
|
pes2o/s2orc
|
Soil biochemical responses to nitrogen addition in a secondary evergreen broad-leaved forest ecosystem
In order to investigate the effects of N deposition on soil biochemistry in secondary forests, an N addition experiment was conducted in a secondary evergreen broad-leaved forest on the western edge of the Sichuan Basin, the region with the highest level of background N deposition (about 95 kg N ha−1 yr−1) in China. Three N treatment levels (+0, +50, +150 kg N ha−1 yr−1) were applied monthly to the soil surface in this forest beginning in April 2013. Soil biochemistry and root biomass of the 0–10 cm soil horizon were measured from May 2014 to April 2015. Soil respiration was measured for two years (September 2013 to August 2015). It was shown that N additions were associated with significantly lower soil pH, microbial biomass C (MBC) concentration, MBC/microbial biomass N (MBN) ratio, root biomass, and soil respiration rate, and significantly higher concentrations of ammonium (NH4+) and nitrate (NO3−). These results indicate that N additions had a significant effect on the size of the soil microbial community. In addition, soil C storage may potentially increase owing to the reduced soil C release under N addition.
Currently, forest N deposition research is primarily concentrated in temperate and boreal forests. Relatively few such studies have been conducted in tropical and subtropical forests, particularly secondary forests. Secondary forests are plant communities that naturally regenerate after complete anthropogenic forest clearance. In China, due to over-exploitation and excessive harvest in the past, primary forests have gradually disappeared and have been replaced by large areas of secondary forests 32 . These represent an important component of forest resources in China 33 . Many studies indicated that restored secondary forests played an important role in terrestrial ecosystems' net C sinks over the past few decades [34][35][36] . It was reported that, on a global scale, secondary forests contributed an estimated 0.35-0.6 Gt C yr −1 to terrestrial C sinks in the 1990s 35 , while N deposition contributes about 0.13 Gt C yr −1 to the total secondary forest C sinks 36 . Although the stimulatory effect of N deposition on the growth of secondary forests is the main explanation for the increase in forest C sequestration in recent years 36 , many studies have indicated that the effects of N deposition on various forests are not consistent, and eventually, some forests may transition from N-limited to N-saturated ecosystems 37 .
The large ecotone in southwestern China, on the western edge of the Sichuan Basin has a cloudy, wet climate. The average wet N deposition in this region from 2008 to 2010 was about 95 kg N ha −1 yr −1 , which is considerably higher than the corresponding mean estimates for other areas in China, and in the United States and Western Europe [38][39][40] . Several possible reasons can be used to explain the high level of wet N deposition rate in this region. Firstly, the Sichuan Basin is an important industrial-agricultural economic region in southwest China. The reactive N creation in this region significantly increased because of the rapid development in economy. And reactive N released from the Sichuan Basin could be transported to the western edge of the Sichuan Basin by monsoon. Secondly, due to the rise in elevation, warm moist air from the Sichuan Basin is readily condensed into rain on the western edge of the Sichuan Basin. Therefore, there is abundant rainfall along the western edge of the Sichuan Basin (known as the "rainy zone of west China"). Thus, reactive N with rainfall may be deposited substantially in this special zone. It has been predicted that the largest increases in N deposition in the world will occur in this region over the next few decades 3 .
It is widely believed that excessive N deposition places a considerable burden on the functions and structures of ecosystems. Previously, we conducted simulated N deposition tests in Pleioblastus amarus and hybrid bamboo (Bambusa pervariabilis × Dendrocalamopsis daii) plantations and several other forest plantations in this region. The results from these studies indicated that N additions increased the rates of soil N mineralization and nitrification, the concentrations of NH 4 + and NO 3 − , microbial biomass carbon, fine root biomass, and mean annual soil respiration rate. On the other hand, litter decomposition rates and soil pH decreased, with mixed effects on soil enzyme activities 10,31,[41][42][43] . These results indicate that such a high level of N deposition may have severely affected many ecosystem properties and processes in this area. Notably, a large area of secondary forest in this region has naturally regenerated from saplings surviving from 1956, when the virgin forest was destroyed. There are significant differences in stand age, soil development, soil fertility and ecosystem structure among primary forests, secondary forests and forest plantations. Thus, the responses of the three types of forests to N addition are potentially different. Secondary forest is the most important forest type in the high-N-deposition regions of China, yet the ecological consequences of continuously increasing N deposition for local secondary forests in this region, which already receives a high level of background N deposition, remain unclear. Therefore, an experimental N addition study was conducted in a subtropical secondary evergreen broad-leaved forest on the western edge of the Sichuan Basin, China. The aim of this study was to understand the effects of elevated N deposition on soil C status, nutrient availability, microbial properties and soil enzyme activities in a secondary forest ecosystem receiving a high level of ambient N deposition.
Materials and Methods
Site description. The N addition experiment was conducted in a secondary evergreen broad-leaved forest in Wawushan Mountain National Forest Park, situated in Hongya, Sichuan Province, China (29°32′35″N, 103°15′41″E, altitude 1 600 m). The area experiences a monsoon-influenced, subtropical highland climate. The annual mean temperature is 10 °C, with the lowest monthly value (−0.9 °C) in January and the highest value (22.5 °C) in July. The annual rainfall and evapotranspiration are 2 323 mm and 467 mm, respectively, and the annual average relative humidity is 85% to 90%. The soil is classified as a Lithic Dystrudept (according to USDA Soil Taxonomy), and the bedrock is granite. The average soil depth to bedrock is greater than 1 m. Before the primary forest was destroyed in 1956, the site was representative of the mid-subtropical evergreen broad-leaved biome characterizing the study area, consisting of Castanopsis platyacantha and Schima sinensis. No further disturbance occurred after 1956, allowing the survivors to naturally recover into mature secondary evergreen broad-leaved forest. At the study site, the average plant density was 725 stems per hectare and the mean diameter at breast height (DBH) was 23.5 cm. This forest is dominated by the tree species C. platyacantha and S. sinensis, whose mean DBHs were 23.8 cm and 25.4 cm, respectively; C. platyacantha is the most important constructive species, with the highest importance value of 56.91. The understorey comprises the shrub species Ilex purpurea and Eurya japonica and the sparsely distributed herb species Cyperus rotundus. Soil and litter chemistry before N treatment. In November 2012, thirty-six litter samples were randomly taken from the surface of the study site using a metal frame (50 cm × 50 cm). At the same time, thirty-six soil profiles (1 m deep) were dug. Each soil profile was divided into four layers (0-10 cm, 10-40 cm, 40-70 cm, 70-100 cm), since the thickness of the top organic soil is about 10 cm. Soil samples in each layer were collected using a small shovel for chemical analysis. For measuring soil bulk density (g cm−3), an undisturbed soil core was collected using a 100 cm³ cutting ring. Litter samples were dried to constant weight at 65 °C and weighed. Then each litter sample was ground using a Wiley mill with a 1-mm mesh screen. Soil samples were air-dried, ground and sieved through a 2 mm mesh for determining soil potential acidity, and sieved through a 0.25 mm mesh for measuring soil total organic carbon (TOC), total nitrogen (TN), total phosphorus (TP) and total potassium (TK). Total organic carbon of litter and soil was determined by the dichromate digestion method, and TN was measured by the Kjeldahl method. For determining TP and TK concentrations, litter and soil samples were digested with perchloric acid-sulfuric acid (HClO4-H2SO4) and sodium hydroxide (NaOH), respectively. TP was then determined by colorimetry and TK with an atomic absorption spectrophotometer (TAS-986, PGENERAL, Beijing, China). Soil potential acidity (pH-KCl) was determined with a glass electrode in 1 M potassium chloride (KCl) extracts. The results are shown in Table 1.
Experimental design. Nine 20 m × 20 m plots were established within the study site in October 2012, at intervals of more than 20 m. Plots were divided into three treatments with three plots assigned to each: low nitrogen treatment (LN, +50 kg N ha−1 yr−1), high nitrogen treatment (HN, +150 kg N ha−1 yr−1), and ambient nitrogen/control (CK, +0 kg N ha−1 yr−1). Consequently, the cumulative N inputs (ambient deposition plus addition) received by the CK, LN and HN plots were 95, 145, and 245 kg N ha−1 yr−1, respectively. The LN and HN treatments simulate scenarios in which nitrogen deposition increases by roughly 50% and 150%. Treatments were randomly assigned to plots. Beginning in April 2013, ammonium nitrate (NH4NO3) solution was applied to the soil surface monthly and continued for the duration of the study (April 2013 to August 2015). In each month, the fertilizer was weighed, dissolved in 10 L of water, and applied to each plot using a sprayer. The control plots received an equivalent volume of water without fertilizer.
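As a worked check of the monthly application (not stated explicitly in the text), and assuming reagent-grade NH4NO3 with an N mass fraction of 28/80 = 0.35, the amount dissolved in the 10 L of water for each plot would be roughly:

```python
PLOT_AREA_HA = 20 * 20 / 10_000          # 0.04 ha per 20 m x 20 m plot
N_FRACTION_NH4NO3 = 28.0 / 80.0          # ~0.35 g N per g NH4NO3 (assumed pure salt)

def monthly_nh4no3_g(rate_kg_n_ha_yr: float) -> float:
    """Grams of NH4NO3 needed for one monthly application to one plot."""
    n_per_month_kg = rate_kg_n_ha_yr * PLOT_AREA_HA / 12.0
    return n_per_month_kg / N_FRACTION_NH4NO3 * 1000.0

# monthly_nh4no3_g(50)  -> ~476 g per LN plot per month
# monthly_nh4no3_g(150) -> ~1429 g per HN plot per month
```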
Litter fall and fine root biomass. Litter was collected monthly from ten 1 m × 1 m nylon mesh nets installed at randomly selected positions in each plot from May 2014 to April 2015. Because of winter snow cover, litter from January, February and March was collected as one composite sample at the end of March. Each litter sample was dried to constant weight at 65 °C and weighed.
The soil core method was used to determine root biomass. Root samples were taken in May, June, August, October and November of 2014 and April 2015. Three healthy trees of average size (C. platyacantha, DBH about 23.8 cm) were selected as target trees in each plot. On each sampling date, two soil cores of the top 10 cm were collected 1 m from each designated sample tree using a soil auger 5 cm in diameter, so that cores from three trees were collected in each plot. Samples were placed into plastic bags and stored at −4 °C. Roots were separated from soils by washing and sieving with a 0.25 mm sieve. Live roots were distinguished from dead roots by color and flexibility. Roots of C. platyacantha, shrubs and grasses were distinguished from each other by color and morphology. For determining root biomass, roots were dried at 65 °C for 48 h and weighed. Root biomass was expressed as the weight of roots per unit volume of soil (g m−3).
Soil biochemical characteristics measurements. Nine composite samples were obtained from the experimental site in May, September and November 2014, and April 2015. Each composite sample comprised five subsamples of the organic soil layer (about 0-10 cm) randomly collected from each plot with a soil auger. After removing the visible roots using tweezers, the soil samples were ground, sieved through a 2 mm mesh, and stored at 4 °C for analysis within 1 week. For measuring soil total organic carbon (TOC) and total nitrogen (TN), air-dried subsamples were ground and passed through a 0.25 mm sieve.
Total organic carbon was determined by the dichromate digestion method, while TN was measured using the Kjeldahl method. Soil ammonium nitrogen (NH4+) and nitrate nitrogen (NO3−) were extracted with a 2 M KCl solution and measured with colorimetry. Soil microbial biomass carbon (MBC) and soil microbial biomass nitrogen (MBN) were measured using the 24-h chloroform fumigation extraction technique with a total C/N analyzer (Shimadzu model TOC-VcPH +TNM-1, Kyoto, Japan). The MBC was calculated as the difference in extractable C between fumigated and unfumigated soils, divided by 0.45. The MBN was calculated as the difference in extractable N between fumigated and unfumigated soils, divided by 0.54. Soil available phosphorus (AP, including water-soluble P and inorganic P) was extracted with a mixed solution of 0.05 M HCl and 0.0125 M H2SO4 and measured with colorimetry. Soil available potassium (AK) was extracted with 1 M ammonium acetate (CH3COONH4) and measured using an atomic absorption spectrophotometer (TAS-986, PGENERAL, Beijing, China). Here, in order to allow comparison with other studies, soil pH was determined with a glass electrode in aqueous extracts (pH-H2O).
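The fumigation-extraction arithmetic described above reduces to the following sketch; the variable names and units are illustrative.

```python
def microbial_biomass(ec_fumigated, ec_unfumigated,
                      en_fumigated, en_unfumigated):
    """Chloroform fumigation-extraction estimates of MBC and MBN.

    ec_* : extractable organic C (e.g., mg C kg-1 dry soil)
    en_* : extractable N (e.g., mg N kg-1 dry soil)
    The divisors 0.45 (for C) and 0.54 (for N) are the extraction-efficiency
    factors stated in the text.
    """
    mbc = (ec_fumigated - ec_unfumigated) / 0.45
    mbn = (en_fumigated - en_unfumigated) / 0.54
    return mbc, mbn
```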
Urease activity was measured spectrophotometrically according to Sinsabaugh et al. 44 . Invertase activity was measured using the method described by Frankeberger Jr and Johanson 45 . Protease activity was determined by the method of Zhang 46 using sodium caseinate as substrate. The activity of acid phosphatase (AcPh) was measured following a publication of Saiya-Cork et al. 47 using 4-methylumbelliferyl (MUB) phosphate as substrate. The activity of nitrate reductase (NR) was measured spectrophotometrically according to Zhang 46 with slight modification. Enzyme activity was calculated as the μmoles of substrate converted per hour per gram of dried soil.
Table 1 (soil and litter chemistry before N treatment) reports, by soil depth (cm): soil potential acidity (pH-KCl), soil bulk density (g cm−3), C (g kg−1), total N (g kg−1), total P (g kg−1), total K (g kg−1), C/N and N/P.

Statistical analysis. Repeated measures ANOVA was used to examine the effects of N addition on litterfall, root biomass, soil nutrient concentrations, soil pH, enzyme activities, MBC, MBN and soil respiration. Relationships between soil properties were determined by using Pearson correlation coefficients. We determined the relationship of soil respiration rate to root biomass or soil MBC content using linear regression. Significant effects were determined at α = 0.05.
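A minimal sketch of these analyses for the pooled plot-level data (hypothetical arrays) is:

```python
from scipy import stats

def respiration_relationships(respiration, root_biomass, mbc):
    """Pearson correlations and a linear fit between soil respiration and its
    hypothesised drivers, using values pooled across plots and sampling dates."""
    r_root, p_root = stats.pearsonr(root_biomass, respiration)
    r_mbc, p_mbc = stats.pearsonr(mbc, respiration)
    fit = stats.linregress(mbc, respiration)   # slope, intercept, rvalue, pvalue
    return (r_root, p_root), (r_mbc, p_mbc), fit
```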
Results
Litterfall and root biomass. Litterfall mass at the experimental site displayed evident seasonal variation and peaked in May (Fig. 1A), while root biomass of Castanopsis platyacantha was relatively stable (Fig. 1B). The average sum of litterfall over the period May 2014 to April 2015 was 5.8 ± 0.4 kg m −2 yr −1 . Repeated measures ANOVA revealed that simulated N additions had no significant effect on litterfall. The average root biomass at control plots was 3.5 ± 0.2 kg m −3 , but was 19% and 29% lower under LN and HN treatments, respectively. The latter difference was significant (P = 0.048).
Soil nutrient availability. Repeated measures ANOVA indicated that there were significant seasonal patterns in soil TOC, NO 3 − and AP concentrations (P < 0.05, Fig. 2A,C,E); the addition of N significantly changed the seasonal pattern of AP (P = 0.026, Fig. 2E). Meanwhile, seasonal variations in the concentrations of TN, NH 4 + and AK were not significant (Fig. 2B,D,F). In general, there were no significant differences between N treatments and the control with respect to the concentrations of soil TOC, TN, AP and AK (Table 2). In the HN treatment, soil NO 3 − and NH 4 + concentrations were significantly higher (39% and 80%, respectively), relative to the control. High concentrations of N significantly increased the availability of inorganic N at the organic horizon. Correlation analysis indicated that soil TOC content was significantly positively correlated with both concentrations of NO 3 − and AP (Table 3).
Soil microbial properties and pH. The MBC concentration, ratio of MBC to MBN, and pH displayed significant temporal variation (P < 0.01), whereas the concentration of MBN was relatively stable (P = 0.130, Fig. 3). The mean concentrations of MBC in the CK, LN and HN treatments were 5.40 ± 0.97, 2.62 ± 0.13 and 3.40 ± 0.33 g kg−1, respectively, with the differences being significant (Table 2). The effect of N addition on MBN concentration was not significant. The ratio of MBC to MBN was 51% and 41% lower under LN and HN, respectively, relative to the control (P < 0.05). The average pH at the organic horizon in control plots was 3.91 ± 0.01, while plots under N treatments exhibited a significantly (P = 0.002) lower pH. Correlation analysis showed that the concentration of MBC and the ratio of MBC to MBN were both negatively correlated with soil TN concentration, and that these correlations were highly significant. Similarly, highly significant negative correlations were observed between soil pH and the concentrations of soil TOC, NO3− and AP (Table 3). The activity of NR was 14% lower under LN relative to the control. No significant correlations were detected with respect to the activities of urease, invertase, protease and AcPh (Table 4).
Soil respiration. The mean soil respiration rate was 1.46 ± 0.19 μmol CO 2 m −2 s −1 (63 ± 8 mg C m −2 h −1 ) in the control plots (Fig. 5). Compared with the control, the average respiration rates in the HN treatment were 30% lower (P < 0.05). From May 2014 to April 2015, the cumulative CO 2 -C flux in the CK, LN and HN treatments were 553.5 ± 71.9, 520.4 ± 45.0 and 387.9 ± 15.7 g C m −2 yr −1 , respectively. Correlation analysis indicated that the soil respiration rate was significantly positively correlated with root biomass (r = 0.595, P = 0.009) and soil MBC (r = 0.630, P = 0.028, Fig. 6). Table 3. Results of Pearson correlation analysis of soil nutrient availability, microbial properties and pH in a secondary evergreen broad-leaved forest ecosystem (n = 27). *Correlation is significant at the 0.05 level (2-tailed). **Correlation is significant at the 0.01 level (2-tailed).
Discussion
In the current study, N addition was associated with significantly lower soil pH, MBC concentration, MBC/ MBN ratio, root biomass, and soil respiration rate. Conversely, significant positive correlations were detected with respect to the concentrations of TOC, TN, NH 4 + and NO 3 − . Several previous studies reported that N deposition can result in soil acidification in terrestrial ecosystems 15,28,29,48 , which was confirmed by our study. Nitrogen addition was associated with heightened soil NH 4 + concentration, which can be expected to accelerate nitrification 6,43 . This is because NH 4 + is the substrate of nitrification and the nitrification rate depends on soil NH 4 + concentration. Moreover, heightened soil NH 4 + buffered the fierce competition between nitrobacteria and plant uptake and heterotrophic microorganisms' immobilization, which thereby promoted nitrification. During the process of nitrification, when NH 4 + is transformed into NO 3 − , two moles of H + are released into the soil per mole of NH 4 + nitrified. Although ammonification process (i.e. organic matter degradation with production of NH 4 + ) consumes one H + , which to some extent buffers acidification, increased soil NH 4 + enhanced the substrate of nitrification that acidified soils. In addition, redundant soil NO 3 − could leach out of soils, leading to the loss of metal cations based on the charge balance in soil solutions, weakening their buffering against soil acidification 29 . Besides, increased NH 4 + concentration may promote plant N uptake, in which a mole of H + is released per mole of NH 4 + assimilated by plant roots. Therefore, N additions result in the accumulation of H + in the soil and make the metal cations easy to leach out of soils, leading to acidification. A highly significant negative correlation between soil pH and NO 3 − concentration (r = −0.720, P < 0.01) was observed, which suggests that a lower soil pH is associated with accelerated nitrification. However, other studies found that N additions had no significant influence on soil pH, and conjectured that enhanced production of certain metabolites by soil microorganisms could have had a neutralizing or buffering effect 30 .
Our data indicated that high levels of N addition considerably reduced the fine root biomass of C. platyacantha. According to cost-benefit theory, plants allocate proportionately more resources to aboveground biomass when soil resource availability is enriched, so less C is allocated belowground, resulting in a decrease in root biomass under N addition 49,50 . Other potential causes of a decline in root biomass are soil acidification and aluminium (Al) toxicity 51 . The H+ released into the soil can rapidly react with Al in the soil mineral lattice, leading to a sharp increase in Al3+ in the soil solution 52 . Several studies have shown that Al3+ and H+ both have toxic effects on root growth and antagonistic effects on ion uptake 51 . It is therefore unsurprising that root biomass was significantly lower in N-treated plots.
Our results showed that N deposition significantly decreased soil MBC, which is consistent with the conclusion that N enrichment decreases soil microbial biomass in many ecosystems 26,53 . Several potential mechanisms may help to explain this. Firstly, N additions reduced root biomass, metabolism and C exudate production, with consequent effects on rhizosphere microorganism activity and biomass. Secondly, high N additions may lead to a condensation of organic compounds with N-containing compounds and/or an accumulation of compounds that are toxic to rhizosphere fungi 26 . Unlike soil MBC, MBN concentration was not significantly different between the N treatments and the control. This response of MBN to N enrichment may reflect luxury N uptake after N addition. In contrast, two previous studies conducted in bamboo plantations in the vicinity of this study site showed that N addition significantly increased both MBC and MBN concentrations, accompanied by an increase in root biomass 10,31,41 . We therefore speculate that the response of root biomass to simulated N deposition may largely determine the response of microbial biomass to N addition in this region.
In this study, we found a significant decrease in the soil MBC: MBN ratio. Baldos et al. 6 demonstrated that N additions significantly reduced the soil MBC: MBN ratio at different elevations, and speculated that the decline in the soil MBC: MBN ratio corresponded to an increase in the ratio of bacteria to fungi. This is because the C: N ratio of bacteria is generally lower than that of fungi. Other previous studies also corroborated this hypothesis by demonstrating that the bacteria: fungi ratio was increased by N treatment 54 , or significantly and positively correlated with the level of N deposition 55 . This shift in the bacteria: fungi ratio suggests that N fertilization may have altered microbial community composition in our study as well. Further research is warranted to examine in depth the effects of N enrichment on microbial community structure at our study site.
A number of studies on N additions in forest ecosystems reported that the soil respiration rate decreased as a consequence of decreases in root and soil microbial biomass [23][24][25][56][57][58] . A similar pattern was detected in the present study, where we found significant positive correlations between soil CO2 emissions and both root biomass and soil MBC content (r = 0.595, P = 0.009 and r = 0.630, P = 0.028, respectively). In other words, the decrease in root and soil microbial biomass was the likely driver of the decrease in soil respiration under N addition.
In our study, experimental N additions had no significant influence on the contents of soil TOC and TN. Globally, N fertilization generally has an insignificant effect on soil C storage 59 . Theoretically, the addition of N has the potential to increase soil C content by boosting ecosystem net primary productivity 9,11,14 , by reducing the decomposition rate of soil organic matter (SOM) 60 , and by inhibiting soil respiration 24 . However, the decrease in belowground C allocation detected in our study likely restricted any enrichment of soil C and N 15,48 . Furthermore, because of the size and heterogeneous nature of the total soil C pool, a long experimental duration is necessary to observe the influence of N fertilization on soil C content 15 . For instance, Huang et al. 14 conducted a 15-year field experiment in a second-rotation Pinus radiata plantation in New Zealand and observed that the surface soil C concentration of N-fertilized plots increased significantly after 10 years of treatment, but showed no significant change during the first 5 years. Therefore, it is possible that soil C and N contents at our study site may increase under longer-term N addition.
Our results indicate that artificial N additions had no significant influence on soil AP concentrations, which is consistent with a meta-analysis of Deng et al. 61 who found that N additions had no significant effect on soil labile P across global sites. The insignificant increase in AP in N-treated plots may be explained by the AcPh activity. This is supported by the significantly positive correlation between soil AP content and AcPh activity, reported previously by Zheng et al. 62 in two subtropical forests in southern China. In general, most N addition experiments reported increases in the activity of soil enzymes involved in P cycling 30,31,62 . This upregulation indicates that the transformation of organic P to inorganic P is accelerated by N additions, and thus may cause increases in soil AP. However, Tu et al. 31 observed that soil AP concentration in a bamboo forest was significantly decreased by N fertilization, despite an increase in AcPh activity. According to Tu et al. 31 , the increased microbial biomass may promote the immobilization of inorganic P and thus result in a decrease in soil AP. However, in our study, microbial biomass was lowest in the HN treatment. It can be hypothesized that the microbial retention of inorganic P may decline as a consequence. This would lead to a small increase in soil AP concentration, which was indeed observed.
The influence of added N on soil enzyme activity can be attributed to the response of soil microbes, and soil properties such as SOM and pH 63 . Throughout the experimental period, N additions significantly reduced NR activity, but had no significant effects on the activities of urease, invertase, protease and AcPh. In a similar study in a subtropical forest, Wang et al. 30,64 observed that the addition of N in the form of NH 4 NO 3 solution generally restricted NR activity, while other forms of N accelerated NR activity. Evidently, the formulation by which N is applied determines the effects on soil enzyme activity 64 . The decline in NR activity suggests that denitrification may have been restrained by NH 4 NO 3 addition, given the role of NR as a crucial enzyme in denitrification. The negative correlation between NR activity and NO 3 − concentration suggested that the loss of NO 3 − via denitrification was limited by N addition. In our study, soil microbes may have contributed little to the changes in soil enzyme activity, since no significant correlations were observed. Instead, soil pH and nutrients played a more important role. Significantly, NR and AcPh activities were positively correlated to soil pH and TN concentration, and negatively correlated with TOC content. On the other hand, invertase, urease and protease activities were positively correlated to AK, AP and NO 3 − concentration, respectively. These results indicate that N additions tend to increase N and P mineralization rates, but have little effect on C mineralization. Because of inherent variation in soil properties, microbial communities and functional diversity, vegetation type, N formulations, and experimental duration, it is difficult to draw a unified conclusion regarding the responses of soil enzyme activities to N addition.
In the present study, we found that litterfall mass was not related to N addition, a finding that is consistent with some previous studies 20 , but not others 21,65 . Nevertheless, it is worth noting that most previous studies found that N addition increased the N content of litterfall and decreased its C:N ratio 20,21,65 .
Conclusion
Artificial N deposition in the study area tended to increase nutrient availability. Conversely, N addition decreased root biomass and soil microbial biomass, thereby reducing soil C emissions. The effects of N addition on soil pH and the MBC/MBN ratio indicate that soil acidification and a change in microbial community size, and possibly composition, were the primary consequences of increased N deposition. Microbial community composition as such was not investigated here, and further research is needed.
|
v3-fos-license
|
2023-02-03T06:16:44.567Z
|
2023-02-01T00:00:00.000
|
256502228
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": null,
"oa_url": null,
"pdf_hash": "7d69ad53310202e4a92cc00657713055ba9056b0",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46546",
"s2fieldsofstudy": [
"Biology"
],
"sha1": "b43dde29c6514d27ee38af501a28d5fef8e0912f",
"year": 2023
}
|
pes2o/s2orc
|
From magic spot ppGpp to MESH1: Stringent response from bacteria to metazoa
All organisms are constantly exposed to varying levels of nutrients and environmental stresses. This is especially true for unicellular organisms, such as bacteria, which face perpetual changes in their outside environment. Therefore, it is critical to have mechanisms to sense, adapt to, and cope with various nutrient deprivations and metabolic stresses. In bacteria, one of the major stress adaptive responses is the “stringent response” [1], which enables bacteria to transition into a semi-dormant state characterized by proliferation arrest, stress survival, and metabolic/transcriptome reprogramming. The stringent response is triggered by the alarmone (p)ppGpp, also termed the magic spot, whose level dramatically increases under stress and which binds to various protein targets to mediate the stringent response [1]. The (p)ppGpp level is regulated by the balance of its synthesis and degradation (hydrolysis) by proteins in the RelA/SpoT homologues (RSHs) superfamily [2]. The best-characterized RSH proteins are the multi-domain long RSH proteins involved in (p)ppGpp synthesis (RelA) and hydrolysis (SpoT). In addition, there are also single-domain RSH proteins known as small alarmone synthetases (SASs) or small alarmone hydrolases (SAHs), mediating the synthesis and hydrolysis of alarmones, respectively [3]. Interestingly, SAHs have been classified into multiple subgroups, including the Mesh1 and Mesh1-L subfamilies [2]. However, the physiological function of these short RSH proteins is still not completely understood.
How do magic spot and alarmone mediate stringent response?
The stringent response involves coordinated alterations in bacterial transcription, physiology, and metabolism. During the stringent response, (p)ppGpp plays a central role in this wide variety of changes by binding to its many intracellular protein and RNA targets. While some targets are common across bacteria, other targets are only relevant to particular species and lifestyle stages. The diverse (p)ppGpp targets, affected biological processes, and heterogeneity among bacteria have been discussed extensively in several outstanding reviews [1,4,5]. While there are common and conserved themes, the specific protein targets and regulatory mechanisms of (p)ppGpp may differ between bacterial species. For example, while (p)ppGpp affects global and regional transcription in Escherichia coli through direct binding to DksA and the RNA polymerase (RNAP) [6], in other bacteria it affects transcription by altering the GTP level and binding to specific transcription factors, such as PurR [7] and the MglA/SspA complex [8]. In addition, (p)ppGpp can reduce DNA replication by binding to DnaG (the primase that synthesizes the priming RNA required for DNA replication) [9], affecting the expression and stability of DnaA (the replication initiation ATPase) [10], and modulating the supercoiling state of oriC [11]. Furthermore, (p)ppGpp reduces cellular nucleotide pools through substrate depletion and inhibition of multiple enzymes (PurF, GuaB, and Gmk) mediating nucleotide biosynthesis [12][13][14]. At the level of protein translation, (p)ppGpp reduces translation initiation by binding to the initiation factor IF2 [15] and inhibiting ribosomal assembly [16,17]. Therefore, (p)ppGpp binds to numerous protein targets in different biological processes to mediate a wide variety of phenotypic changes in the bacterial stringent response (Fig 1).
MESH1 is the SpoT homolog in metazoan genomes
For many years, (p)ppGpp and the stringent response were thought to be relevant only in bacteria and plants, but not in metazoa [18]. Interestingly, metazoan genomes contain MESH1 (Metazoan SpoT Homolog 1), encoded by HDDC3 (HD Domain Containing 3), even though no similar homolog of RelA is found in metazoa. MESH1 can also hydrolyze (p)ppGpp, suggesting a conservation of biochemical function between MESH1 and SpoT [19]. However, (p)ppGpp was found to exist at a very low level in metazoans (approximately 10−6 of the level found in bacteria) [20]. Therefore, the function and relevant substrate of MESH1 remained a mystery for a long time. Interestingly, the removal of Mesh1 from Drosophila triggered a transcriptome response reminiscent of the bacterial stringent response [19], suggesting a functional conservation across evolution.
MESH1 removal robustly protected cancer cells from ferroptosis
The function of MESH1 in humans was uncovered during a forward genetic screen of ferroptosis, a newly recognized form of stress-induced cell death characterized by oxidative stress, iron dependence, and lipid peroxidation [21,22]. While first discovered during the investigation of the cell-killing ability of erastin [21], ferroptosis is now appreciated to be broadly relevant in multiple settings, including tumor suppression, neurodegeneration, liver cirrhosis, and ischemia-reperfusion injury [22,23]. In addition, ferroptosis also plays a role in host-pathogen interactions [24]. To identify genetic determinants of ferroptosis triggered by cystine deprivation, we performed several forward genetic screens [25,26] and identified MESH1 as a top hit, as its knockdown robustly protected all tested cells from ferroptosis for up to 1 week [27]. Therefore, MESH1 is a novel and robust regulator of ferroptosis in human cancer cells.
Additional phenotypic analysis of MESH1 knockdown: phenotypic conservations
In addition to ferroptosis survival, MESH1 knockdown also triggers additional phenotypic responses. First, there is a robust proliferation arrest in all tested cancer cell lines, tumor spheres, and xenografts in mice (Fig 1). Transcriptome analysis revealed a significant reduction of dNTP synthesis genes and depletion of dNTPs [28], the building blocks of DNA synthesis. In addition, there is a depletion of the ribosomal gene set [29], implying a reduction in the expression of ribosome-associated genes, similar to what has been described in the bacterial stringent response (Fig 1). Among the affected genes is the prominent repression of TAZ, but not YAP, two co-activators of TEAD transcription factors downstream of the Hippo pathway. It is important to note that YAP/TAZ proteins are often co-regulated by protein modification and subcellular translocation. Therefore, the selective transcriptional repression of TAZ mRNA upon MESH1 knockdown, which occurs through histone hypoacetylation, is a novel mechanism. Importantly, TAZ restoration significantly mitigates many of the phenotypic changes induced by MESH1 knockdown, including the proliferation arrest and dNTP depletion, highlighting the important role of TAZ repression [28]. The integrated stress response (ISR) is an evolutionarily conserved intracellular signaling network that is activated upon stress to maintain homeostasis and survival [30]. Interestingly, MESH1 knockdown induced the ISR, as evidenced by increased levels of ATF4 protein and eIF2α phosphorylation and the induction of ATF3, XBPs, and CHOP mRNA [29]. Consistent with our observation in human cells, (p)ppGpp has been shown to bind to IF2 and inhibit translation initiation in bacteria [15] (Fig 1). Another study of MESH1 in Caenorhabditis elegans also reported the induction of the unfolded protein response upon MESH1 removal [31]. Therefore, there are significant similarities between the bacterial and metazoan stringent responses across evolution, prompting us to coin the term "metazoan stringent-like response" [32].
The relevant substrate of MESH1 is revealed
While MESH1 knockdown triggers a strong phenotypic response, it was not clear what the relevant substrate(s) are in human cells. We discovered that MESH1 is an NADPH phosphatase capable of removing the 2′-phosphate of NADPH to form NADH [27] (Fig 2). While both NADPH and (p)ppGpp contain a purine moiety, the enzymatic activity of MESH1 toward NADPH is cleavage at the 2′-position of the ribose, in contrast to the 3′-position in (p)ppGpp. The catalytic efficiency (kcat/KM) toward NADPH (14.4 × 10 3 M −1 s −1 ) [27] is similar to the reported enzymatic activity toward (p)ppGpp (9.46 × 10 3 M −1 s −1 ) [19]. Furthermore, the binding mode of NADPH was captured in the crystal structure of the catalytically inactive MESH1 D66K mutant in complex with NADPH (PDB: 5VXA), providing detailed molecular interactions between MESH1 and NADPH (Fig 2). The NADPH phosphatase activity of MESH1 has been validated in an independent study using the C. elegans homologue of MESH1 [31], verifying NADPH as the relevant metazoan substrate of MESH1. Interestingly, a recent paper from the Wang lab confirmed the NADPH phosphatase activity of the SAH in the phytopathogen Xanthomonas campestris pv. campestris (Xcc) and showed that sah loss strongly reduced NADH, demonstrating the evolutionary conservation of NADPH phosphatase activity across multiple kingdoms [33].
Implication for our understanding of stringent response and remaining questions
The stringent response is an ancient and evolutionarily conserved stress response that provides the primary means by which bacteria survive metabolic stresses. Upon stress relief, the drop in (p)ppGpp ensures the rapid resolution of the stringent response and a return to normal proliferative states. Similarly, MESH1 knockdown in cancer cells triggers a set of biological processes and phenotypic features (stress survival, proliferation arrest, and transcriptional reprogramming) that are highly similar to the bacterial stringent response (Fig 1). Therefore, we have termed these phenotypic responses to MESH1 inhibition the metazoan stringent-like response [32]. However, while (p)ppGpp binds and modulates various molecular targets in bacteria, several additional pathways (such as TAZ and the integrated stress response) connect MESH1 to downstream biological processes (Fig 1). In addition, there has been an evolutionary acquisition of distinct substrates by SpoT versus MESH1. During evolution, enzymes may acquire specificity for a new substrate or broader substrate specificity, as shown by the ability of Xcc SAH to hydrolyze both (p)ppGpp and NADPH [33]. Evolutionary changes in substrate preference can coincide with the development of new signaling pathways. Despite these advances, much remains unknown about MESH1 and the metazoan stringent-like response. It is unknown what stresses and external stimuli could induce the stringent response in metazoa. Furthermore, it is not clear what the phenotypic response to MESH1 removal is at the organism level in animals, beyond what has been reported in C. elegans [31]. MESH1 appears to play a significant role in tumor biology, and the inhibition of MESH1 by chemical inhibitors could be used to trigger the mammalian stringent-like response to treat various cancers. The field of the metazoan stringent response is in its infancy, and much remains to be explored about this important signaling system and its similarities to and differences from the bacterial stringent response.
|
v3-fos-license
|
2023-08-29T13:04:02.494Z
|
2023-08-29T00:00:00.000
|
261240779
|
{
"extfieldsofstudy": [
"Computer Science"
],
"oa_license": "CCBY",
"oa_status": "HYBRID",
"oa_url": "https://dl.acm.org/doi/pdf/10.1145/3617684",
"pdf_hash": "f7c8b857139a11c04b0d5186cba5e1180e4f0987",
"pdf_src": "ScienceParsePlus",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46548",
"s2fieldsofstudy": [
"Computer Science",
"Engineering"
],
"sha1": "8e641518e11f7a9b5f4ab51aa8b5463445e9ca24",
"year": 2023
}
|
pes2o/s2orc
|
The Effect of Interocular Contrast Differences on the Appearance of Augmented Reality Imagery
Augmented reality (AR) devices seek to create compelling visual experiences that merge virtual imagery with the natural world. These devices often rely on wearable near-eye display systems that can optically overlay digital images to the left and right eyes of the user separately. Ideally, the two eyes should be shown images with minimal radiometric differences (e.g., the same overall luminance, contrast, and color in both eyes), but achieving this binocular equality can be challenging in wearable systems with stringent demands on weight and size. Basic vision research has shown that a spectrum of potentially detrimental perceptual effects can be elicited by imagery with radiometric differences between the eyes, but it is not clear whether and how these findings apply to the experience of modern AR devices. In this work, we first develop a testing paradigm for assessing multiple aspects of visual appearance at once, and characterize five key perceptual factors when participants viewed stimuli with interocular contrast differences. In a second experiment, we simulate optical see-through AR imagery using conventional desktop LCD monitors and use the same paradigm to evaluate the multi-faceted perceptual implications when the AR display luminance differs between the two eyes. We also include simulations of monocular AR systems (i.e., systems in which only one eye sees the displayed image). Our results suggest that interocular contrast differences can drive several potentially detrimental perceptual effects in binocular AR systems, such as binocular luster, rivalry, and spurious depth differences. In addition, monocular AR displays tend to have more artifacts than binocular displays with a large contrast difference in the two eyes. A better understanding of the range and likelihood of these perceptual phenomena can help inform design choices that support high-quality user experiences in AR.
INTRODUCTION
Designing new display systems often requires understanding whether and how the display's visual limitations adversely affect the user experience. Display systems for augmented reality (AR) pose a unique set of challenges because they aim to merge virtual information into the user's natural vision using a system with demanding design specifications (e.g., a wearable optical see-through near-eye display system) [27]. When these wearable systems are binocular, they may employ independent displays and optics for the two eyes, introducing the potential for spatial, temporal, and radiometric differences in the virtual content that each eye sees (Figure 1). Here, we aim to explore the range of perceptual effects that can result when the user of an AR system receives a higher intensity image in one eye than the other.
Fig. 1. Binocular display systems present separate images to each eye. These systems are commonly used for augmented reality (AR) and, due to hardware and software limitations or imperfections, are subject to unintended spatial, temporal, or radiometric differences between the images shown to the two eyes. In this illustration, the user's view of an icon (image credit [48]) has higher contrast in the left eye than in the right eye. These differences may affect their perception of brightness, contrast, luster, rivalry, and depth.
From a display engineering perspective, differences between the left and right eye's views can be desirable or detrimental.Importantly, binocular display systems enable the presentation of images with binocular disparities-the natural spatial offsets between the two eyes' views that can elicit a compelling sense of depth via a perceptual process called stereopsis.However, patterns of imperfections in display panels (sometimes called mura) and spatial distortions introduced by optical architectures may also differ between the two eyes [26,28,38,50].These factors can introduce additional interocular differences that are not intended by the designer, and understanding their potential perceptual consequences is key for optimizing the user experience.Our understanding of these perceptual consequences, however, is still in the early stages.
Basic vision science studies, using simple shapes and gratings as stimuli, suggest that large interocular differences in brightness, contrast, and pattern between the two eyes are likely to elicit troublesome percepts in which the stimulus appears to shimmer or alternate in appearance over time (see [3,54] for review articles).However, small differences can go unnoticed [14].Recent applied research has begun exploring whether and how these phenomena might affect the appearance of AR content.For example, in AR systems with a small eyebox, the two eyes can be subject to different patterns of luminance vignetting (non-uniformity), which may result in degraded image quality [4].However, a recent perceptual study suggests that a binocular AR display system with different vignetting patterns between the two eyes results in reduced salience of these artifacts as compared to a monocular system [8].This prior work shows that certain types of radiometric differences may not be detrimental; in fact, it may be possible to take advantage of binocular combination to achieve certain desired design goals (i.e., better display uniformity).However, prior work has focused largely on assessing just one aspect of binocular perceptual experience at a time [2,9,10,20,29,53], while it is likely that binocular image differences cause multi-faceted perceptual effects that are not well captured by a single perceptual measurement.
Here, we adopt the term dichoptic to refer to stimuli that differ radiometrically between the two eyes (e.g., differ in luminance, contrast, or color).We aim to contribute a better understanding of the perceptual phenomena that occur when viewing dichoptic imagery in AR, in order to support well-informed display design decisions.
We conducted two perceptual experiments to evaluate the implications of dichoptic imagery for user experience in AR systems using a desktop monitor setup that simulates AR imagery.In addition to exploring dichoptic contrast in binocular AR systems, we include conditions simulating monocular AR viewing (i.e., systems in which only one eye sees the displayed image), because monocular designs may be sufficient for some AR applications.Drawing on the basic vision science literature, we identified several perceptual factors pertinent to the appearance of dichoptic imagery: perceived brightness, contrast, luster, rivalry, and depth (Figure 1).Using simulated AR stimuli, we studied all the aforementioned effects together with a battery of subjective response prompts.We examined how these effects varied for different levels of interocular difference and different stimulus patterns.Studying these factors all together, rather than focusing on a single effect, enables us to characterize a broad gamut of potential perceptual consequences to AR display design.
While many of the presented results are relevant to virtual reality (VR) as well, we focus on optical seethrough AR in this report because the optical and electronic demands on such systems necessitate challenging tradeoffs that can be informed by a deeper understanding of dichoptic perception.For example, because AR devices often have light pass through from the environment while VR devices block visible light from the environment, AR devices may need higher intensity light sources for content to be visible when operating in bright environments (video see-through AR devices are an exception).AR imagery presented on optical see-through devices also has an idiosyncratic appearance because the optically overlaid virtual content is often semitransparent.Stimuli with this appearance warrant dedicated investigation as our perceptual interpretations of them are complex [17,57] and underexplored.
Our primary findings are as follows: (1) Across a broad range of visual stimuli, participants judged the appearance of dichoptic images to differ from non-dichoptic images with respect to all five perceptual factors tested.We found that luster was the perceptual effect reported most often with dichoptic stimuli.(2) As the contrast difference between the two eyes increased, the prevalence of all dichoptic perceptual effects increased.
(3) Monocular viewing (i.e., viewing display content in only one eye) resulted in a similar set of perceptual results, but with a higher prevalence.
RELATED WORK
In this section, we briefly summarize the range of perceptual phenomena associated with viewing dichoptic stimuli that differ in luminance or contrast between the two eyes.We focus on achromatic imagery, but these effects are also relevant for chromatic stimuli, which we will take up briefly in the Discussion.
Brightness
For most people, closing one eye does not make the world appear any dimmer under normal viewing conditions.This observation suggests that perceived brightness is not a simple average of the luminance levels reaching the two eyes and has motivated a range of psychophysical research characterizing dichoptic brightness perception.Brightness perception is well-modeled as a weighted combination of the inputs to the two eyes, with the weights varying depending on the context.For example, when simple stimuli (e.g., uniform gray disks) with different luminance levels are shown to the two eyes, we can ask what the resulting perceived brightness is.Generally, the stimulus with a greater contrast is found to dominate the binocular brightness percept (sometimes termed "winner-take-all") [2, 10, 30] (Figure 2(a)).That is, if both stimuli are bright compared to the background (increments), the binocularly perceived brightness tends to match the brighter stimulus and if both stimuli are dark compared to the background (decrements), the perceived brightness matches the darker one.However, percepts can shift toward binocular averaging under certain viewing situations [10].For example, in Fechner's paradox, viewing a dichoptic image pair with different luminance levels in the two eyes results in a darker percept than if the observer closes one eye and just views the brighter of the pair monocularly [30].Under certain viewing conditions, the brightness percept can be more like "loser-take-all" and biased toward the dimmer image.In particular, if additional contours or edges are added to the stimulus with lower contrast, the perceptual biases can switch toward that stimulus [10,30].These observations motivate the need to understand brightness perception in AR systems.For example, if Fechner's paradox or loser-take-all binocular combination occurs for AR displays, then it may be better in some cases to have a monocular AR display system than a binocular one with dichoptic brightness.
Contrast
A related line of research has asked how people perceive the contrast of dichoptic stimuli when the average luminance is matched between the two eyes.Contrast refers to the range between the brightest and darkest regions of an image.For example, research participants can be asked to match or rate the perceived contrast of a binocularly viewed sine wave grating when the two eyes view gratings with different contrast levels (Figure 2(b)).
Research using this type of stimulus has shown that dichoptic contrast perception also tends to follow a winnertake-all pattern similar to dichoptic luminance perception [9,29].This finding is not surprising given that a sine wave grating can be thought of as a set of alternating luminance increments and decrements.Even when the phase between the two dichoptic gratings differs, the perceived contrast is still biased toward the higher contrast grating [9,23].From a display design perspective, these findings seem promising because they suggest that winner-take-all binocular contrast perception can hold even for stereoscopic stimuli with binocular disparities.However, like luminance perception, contextual effects can alter the balance between the two eyes for perceived contrast.For example, recent work showed that the dichoptic contrast percept can be strongly influenced by a lower contrast stimulus if it is embedded within a contour, similar to brightness percepts [52].These modulations were also found to depend on the spatial properties of the stimulus: the influence of the contour was stronger for simple grating-like stimuli with a single orientation and weaker for other more complex stimuli.
Luster
If our perception of dichoptic stimuli could be completely modeled as a weighted mixture of the luminance and contrast of the two eyes inputs, then the challenge of predicting these percepts would just be a matter of determining the appropriate weights for a given stimulus.However, this is not the case.There are unique forms of binocular appearance that can emerge with dichoptic stimuli, such as binocular luster.The lustrous appearance of dichoptic stimuli is subjectively described as shimmery, shiny, or metallic (Figure 2(c)).A classic stimulus targeted to elicit binocular luster is a pattern with opposite contrast polarity in the two eyes (i.e., a luminance increment in one eye and decrement in the other eye), but binocular luster can also be elicited when the two eyes have unequal increments or decrements [36,53].For AR applications, luster may be troublesome if it interferes with the perceived realism or material properties of the stimulus.However, it may also be a tool that designers desire to leverage, for example, to make a virtual object stand out visually or to break color metamerism [16] (see [54] for review).
Binocular Rivalry
Another binocular phenomenon that occurs with dichoptic imagery is rivalry.During binocular rivalry, the appearance of a stimulus changes over time.When a stimulus elicits binocular rivalry, it may appear to match one or the other eye's input at any moment in time, or it may be perceived as a mixture (Figure 2(d)).For example, the binocular percept may be a patchy mix of the two eyes inputs, in which some parts of the percept look like one eye's input while other parts look like the other eye's input [46].To study binocular rivalry, a pair of highly dissimilar images (e.g., gratings with different orientations or two disparate images) are often used, but rivalry can also be elicited by more subtle interocular differences [42].For many binocular AR devices, it is unlikely that the content seen by the two eyes is extremely dissimilar, but for monocular devices that show virtual content to only one eye rivalry may be more of a concern [40].It is thought that the relative strength of each eye's input determines rivalry dynamics, and the eye with the stronger stimulus (e.g., brighter, higher contrast) is the predominant percept [31].This observation holds true for simple stimuli, but not necessarily for more complex stimuli [47].Compared to the other binocular effects covered in this section, salient rivalry is likely to be universally considered as an undesirable visual artifact that compromises the visibility of the displayed content in AR.
Depth
It is well established that the visual system can use positional differences in the two eyes' images (binocular disparities) to infer depth information.It has been recently shown, however, that dichoptically tonemapped natural imagery with interocular contrast and luminance differences can generate a sense of depth as well [51,60].However, this depth effect has been elusive to vision science research, as it is harder to elicit consistently compared to binocular luster, rivalry, and stereoscopic depth (from binocular disparity).Psychophysical studies have demonstrated an anomalous depth effect (also referred to as the "sieve effect" and "rivaldepth") with anticorrelated images in which a white pixel in the left eye matches to a black pixel in the right eye, but there is no binocular disparity [22,35,39] (Figure 2(e)).There is individual variation in this anomalous depth effect, however, such that some participants can perceive a reversal in depth but not others [20,44].It also is highly dependent on the stimulus configuration [19,20].These depth effects may also be associated with luster and rivalry.For example, one small study found that these three effects could all be induced with the same amount of dichoptic luminance difference by simply changing the stimulus size [41].Depending on the use case, this depth effect may be an additional tool for display designers to enhance depth impressions, since people often underestimate the distance of objects simulated via near-eye displays [12].On the other hand, any anomalous depth effects may also be problematic for tasks that require fine depth accuracy.
Modeling Dichoptic Percepts
Considering the importance of the perceptual appearance of dichoptic imagery for display design, it would be useful to be able to predict binocular appearance given any pair of input images for the two eyes.Efforts have been made to develop models to predict various aspects of binocular percepts, but no model exists yet that has been shown to reliably predict a range of perceptual factors at once.For example, some models of binocular combination focus on implementing the mechanisms of early stages of interocular interaction (e.g., interocular suppression) based on basic stimulus properties (e.g., contrast) [11,15,24,32], while other models, particularly those focused on rivalry, employ higher-level frameworks such as perceptual inference and decision making [6,21].However, oftentimes these perceptual models intend to predict only a single aspect of appearance, and most prior work has focused on using controlled stimuli targeted to elicit one type of effect only.Some prior work has explored the perceived image quality of natural dichoptic images, as an extension of conventional 2D image quality metrics [5,56].Recent approaches in this domain have incorporated models of binocular processing; however, the evaluations focus on predicting a single dimensional measure of 3D image quality [7,13,45].To support models that can predict the multi-faceted appearance of dichoptic stimuli, a better understanding of how multiple perceptual effects might co-occur is needed.
PERCEPTUAL EXPERIMENTS
In this article, we present the results of two perceptual experiments designed to examine all five of the aforementioned perceptual factors in dichoptic appearance together.We aim to provide a more holistic picture of what dichoptic stimuli may look like to users and in this way inform display design decisions.For example, it would be beneficial to know if there is any systematic relationship between the different perceptual effects.Are different effects associated more or less with different amounts of interocular image differences?Is there a "sweet spot" for optimal user experience where perceptual artifacts like binocular rivalry are minimized but the sense of depth or contrast is enhanced?How does the perceptual outcome change when viewing different spatial patterns?
In Experiment 1, we examine how spatial complexity and interocular contrast differences influence the occurrence of the different perceptual effects.This experiment uses conventional psychophysical stimuli.It aims to validate our multiquestion experimental procedure and understand the potential relationships between the perceptual factors of interest.Experiment 1 was conducted as part of a larger psychophysical study, and some non-overlapping results from this study were already reported in [52].In Experiment 2, we leverage the paradigm from Experiment 1 to more directly examine how dichoptic imagery varies in appearance in optical see-through AR scenarios.We simulated stimuli in which the AR content is brighter in one eye than the other, which results in both interocular differences in luminance and contrast.Contrast for AR content was defined as the ratio of the maximum AR luminance over the maximum luminance of the background.
Participants
Two groups of 34 adults participated in Experiment 1 (23 females, ages 19-32 years) and Experiment 2 (25 females, ages 18-34 years).All participants had normal or corrected-to-normal visual acuity and normal stereo vision (measured with the Randot Stereotest).The experimental procedure was approved by the Institutional Review Board at University of California, Berkeley, and all participants gave informed consent prior to beginning the study.
Experimental Setup
Stimuli were displayed on a desk-mounted mirror haploscope (Figure 3(a)) to allow for independent presentation of images to the left and right eyes (presented on two LG 32UD99-W LCD displays).This system enabled precise control over the stimulus appearance in each eye without potential interference of optical imperfections that are common in wearable systems (e.g., optical distortions or vignetting).The viewing distance was 63 cm, and participants were head-fixed with a chin rest.The spatial resolution of each display was 3,840 × 2,160 pixels per eye (∼60 pixels per visual degree).The experiment room was dark during the experiment.
To calibrate the displays' luminance, we used a PR650 spectrophotometer to measure the maximum white of each display.Then, we manually adjusted the brightness settings of the displays to achieve the best possible match for the maximum luminance.This adjustment resulted in a maximum luminance of 168 cd/m 2 for the left eye (white point (x, y) = 0.31, 0.31) and 164 cd/m 2 for the right eye (white point (x, y) = 0.32, 0.32).We then empirically measured the gamma nonlinearity of each display so that we could adjust the brightness and contrast of all stimuli in units that were linear with respect to the luminance output.We determined the grayscale gamma nonlinearity of each display perceptually using dithering, and generated a look-up table with the same gamma correction applied to each of the RGB channels.The resulting mid-gray luminance was verified with the spectrophotometer to be approximately half of the maximum luminance and was within 8 cd/m 2 between the two displays.The match between the two displays is also supported by the fact that the secondary perceptual effects such as luster and rivalry were almost never reported for the non-dichoptic stimuli in the experiment.
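As an illustration of the gamma-correction step described above, the sketch below builds an 8-bit look-up table that linearizes a display with a measured gamma. The gamma value of 2.2 is a placeholder assumption, not the value measured in this study, and the snippet is not the authors' calibration code.

```python
# Illustrative sketch: build a gamma-correction LUT so that requested
# 8-bit values are (approximately) linear in displayed luminance.
import numpy as np

measured_gamma = 2.2            # hypothetical display nonlinearity
levels = np.arange(256) / 255.0

# To obtain a desired linear output L, the command value sent to the
# display must be L ** (1 / gamma); quantize back to 8 bits for the LUT.
lut = np.round(255.0 * levels ** (1.0 / measured_gamma)).astype(np.uint8)

print(lut[128])   # command value producing roughly half of max luminance
```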
All stimuli in Experiment 2 were shown in standard sRGB colorspace.While this color gamut is likely representative of typical AR imagery, it has a more limited chromatic range than real natural environments.We adopt a standard of representing light levels in units of linearized pixel intensity in which the minimum light level of the display is assigned a value of 0 and the maximum is assigned a value of 1.With LCD displays, however, some light is always emitted from the display panel backlight, even when the pixel levels are set to 0.
Task
In a series of trials, participants were presented with pairs of stimuli to compare.One stimulus was presented on the top half of the screens and the other on the bottom half.One stimulus was identical in the two eyes (non-dichoptic) and the other stimulus (usually) comprised a dichoptic pair as described below.We call this latter stimulus the reference.Participants used keyboard presses to adjust the contrast of a target pattern in the non-dichoptic stimulus to match the appearance of the target in the reference stimulus as best as they could (Figure 3(a)).They could look back and forth between the stimuli and could spend as much time as they needed to obtain the best match.The positions of the reference stimulus and the adjustable non-dichoptic stimulus were swapped for half of the participants, meaning that half of the participants saw the reference stimulus always on the top and the other half saw it always on the bottom.
After participants indicated that they had found the best match, the stimuli disappeared and they were shown several prompts to assess which, if any, perceptual differences there were between the reference and their best match (Figure 3(b)).They were first asked whether they were able to find an exact match or not.If the answer was no, they were asked to judge the contrast, brightness, luster, rivalry, and depth of their best match against the reference stimulus.The prompts shown in Figure 3(b) were presented sequentially on the screen.Response options to each prompt were top, bottom, same, and unsure.Responses of "top" or "bottom" indicated which stimulus was associated with the stronger perceptual effect.Based on pilot testing, we selected wording to describe luster and rivalry that best matched how participants described these effects (third and fourth questions, respectively).For the luster, rivalry, and depth questions, people were instructed to use the response option "same" when neither stimulus had the effect.Prior to starting the experiment, participants were shown images to help them understand what was meant by rivalry and luster.We showed them orthogonally oriented gratings in each eye to demonstrate how binocular rivalry looks.A square stimulus with different shades of gray in each eye was used to explain what luster looks like.Participants also completed 10 practice trials to get familiar with the task.
Stimuli
In Experiment 1, we used grayscale pattern stimuli to probe the nature of people's responses to the visual appearance questions.In Experiment 2, we used stimuli designed to mimic the appearance of optical see-through AR systems.
Experiment 1.
We used two common types of vision research stimuli in this experiment: vertical sine wave gratings with a spatial frequency of 5 cycles-per-degree (cpd) and a 1/f ("pink") noise pattern with a broad frequency amplitude spectrum similar to that of natural images (Figure 4(a)).In a previous experiment, we found differences between the dichoptic contrast percepts of these grating and noise stimuli [52].Therefore, in addition to these two stimuli we also included three intermediate noise patterns that shared some similarities with the grating patterns (also shown in Figure 4(a)): we matched the pixel intensity distribution of the 1/f noise pattern to the grating through histogram matching (histogram-matched), we bandpass-filtered the 1/f noise image and only kept spatial frequencies between 4 and 6 cpd (5 cpd bandpass), and we repeated the first row of the 1/f noise image for all rows to create a broadband vertical grating (broadband).Each stimulus image was 8-bit and spanned the full range of 0-255 bit levels.
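A rough sketch of how the noise-based patterns could be generated is given below. It is illustrative only: the image size, random seed, and the assumed 60 pixels-per-degree sampling are ours, and the histogram-matched variant is omitted.

```python
# Sketch (assumptions noted above) of 1/f noise, its 4-6 cpd bandpass
# version, and the "broadband" vertical grating built from its first row.
import numpy as np

def pink_noise(n=256, seed=0):
    """1/f ('pink') noise image via spectral shaping, normalized to 0..1."""
    rng = np.random.default_rng(seed)
    fx = np.fft.fftfreq(n)[None, :]
    fy = np.fft.fftfreq(n)[:, None]
    f = np.sqrt(fx**2 + fy**2)
    f[0, 0] = 1.0                                    # avoid divide-by-zero at DC
    spec = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / f
    img = np.real(np.fft.ifft2(spec))
    img -= img.min()
    return img / img.max()

def bandpass(img, low_cpd=4.0, high_cpd=6.0, ppd=60.0):
    """Keep only spatial frequencies between low_cpd and high_cpd."""
    n = img.shape[0]
    f = np.sqrt(np.fft.fftfreq(n)[None, :]**2 + np.fft.fftfreq(n)[:, None]**2) * ppd
    mask = (f >= low_cpd) & (f <= high_cpd)
    out = np.real(np.fft.ifft2(np.fft.fft2(img) * mask))
    out -= out.min()
    return out / out.max()

noise = pink_noise()
noise_5cpd = bandpass(noise)                          # "5 cpd bandpass" pattern
broadband = np.tile(noise[0:1, :], (noise.shape[0], 1))  # repeat first row
```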
Under realistic viewing conditions, targets of visual inspection are rarely viewed in isolation (i.e., against a uniform background). Instead, the surrounding context provides additional visual information that may play a role in determining the appearance. Thus, each image of the stimulus consisted of a 2° circular target of interest embedded in a binocular 4° by 4° surround region with the same type of spatial pattern. For example, the grating target was embedded in a grating pattern with the same spatial frequency and orientation (Figure 4(b)), whereas the 1/f noise was embedded in 1/f noise. To vary the contrast of each eye's target region, we normalized the image range from 0 to 1 and rescaled the values around the mean value as follows: I = μ + c(I0 − μ), where I0 denotes the original image, μ denotes the mean pixel intensity of that image, I denotes the new image, and c is a scalar value that determines the amount of contrast reduction. To generate the reference stimuli, the contrast (c) of the target for the left and right eyes (cL and cR) was set to 0, 0.25, 0.5, or 1, resulting in 16 possible combinations between the two eyes (e.g., cL = 0.25 and cR = 1, cL = 1 and cR = 0.5). We did not present a stimulus with zero contrast in both eyes, so only 15 combinations were used. Of these, six combinations had c = 0 in one eye and c > 0 in the other eye, which we refer to as the special case of dichoptic stimuli with a monocular target. The other six dichoptic combinations were non-monocular (visible target in both eyes) (Figure 4(b)). The remaining three combinations resulted in non-dichoptic stimuli (cL = cR) that were used as control/catch trials. The contrast of the square outside of the target region was always equal to the average contrast of the two eyes' target regions and non-dichoptic. All stimuli were shown on a uniform mid-gray background. In total, there were 75 trials (5 stimulus patterns × 15 contrast combinations).
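The per-eye contrast manipulation and the enumeration of the 15 contrast combinations can be expressed compactly as follows; this is a sketch based on the formula above, not the authors' code.

```python
# Minimal sketch of the contrast manipulation I = mu + c * (I0 - mu)
# and of the 15 left/right contrast pairings used for the references.
import numpy as np
from itertools import product

def rescale_contrast(img, c):
    """Scale image contrast about its mean by factor c (img in 0..1)."""
    mu = img.mean()
    return mu + c * (img - mu)

levels = [0.0, 0.25, 0.5, 1.0]
# All left/right pairings except the all-zero one -> 15 combinations.
combos = [(cl, cr) for cl, cr in product(levels, repeat=2) if (cl, cr) != (0.0, 0.0)]
print(len(combos))   # 15
```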
Experiment 2.
We created stimuli that simulated AR visual experiences by compositing a virtual icon with a naturalistic background image.The virtual icon was then used as the target for the perceptual task.We tested four different patterns for the virtual icons.To have a baseline comparison with Experiment 1, we included the 5 cpd grating and 1/f noise pattern again.Based on the results of Experiment 1, we were interested in understanding if more realistic AR content would appear similar to the two baseline stimuli or not.We thus selected two different icon patterns from an existing library [48], which we refer to as simple and complex icons (Figure 5(a)).The grating, noise, simple, and complex icon stimuli were all overlaid on an image of a natural background from the SYNS dataset [1] (Figure 5(b)).The same background image was used for all icons to focus on the potential perceptual effects associated with each unique icon.Similar to Experiment 1, target regions (the icons) subtended 2°circles and the background region subtended a 4°square.
The contrast adjustment of the icons was performed similarly to Experiment 1, with some key differences to more closely simulate the joint contrast/luminance modulations that can occur when one display in an optical see-through AR system is brighter than the other. In particular, since these systems use additive light, we simulated the addition of the icon image onto the natural background. Pixel values of the background image (B) were scaled down by a factor of 2 so that only half of our display's dynamic range was used to simulate the background and the other half could be used for the icons. This effectively provides a maximum AR contrast of 2:1 against the background. The normalized 8-bit, three-color channel icon images (A) were also downscaled by a factor of 2 before being multiplied by the different scale factors (c), such that the maximum normalized pixel value in the combined image was equal to 1: I = B/2 + c(A/2). All contrast adjustments were made in linear units based on the assumption that all color channels were encoded with a gamma non-linearity of 0.45 (e.g., normalized bit values from the background and icon images were exponentiated to 1/0.45 prior to being combined). We used the same contrast combinations as described for Experiment 1 for the AR target in this experiment (Figure 5(c)). The surround region was identical in the two eyes. In total, there were 120 trials in Experiment 2 (4 stimulus conditions × 15 luminance combinations × 2 repeats).
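A hedged sketch of this additive compositing step is shown below. The helper name simulate_ar_eye and the placeholder background and icon arrays are ours, and details such as clipping and color handling are assumptions rather than the authors' exact pipeline.

```python
# Sketch of simulating additive optical see-through AR content for one eye,
# following the description above (exact pipeline details are assumed).
import numpy as np

GAMMA_ENC = 0.45   # encoding gamma stated above; linearization uses 1/0.45

def simulate_ar_eye(background_enc, icon_enc, c):
    """Return the displayed (encoded) image for one eye given icon scale c."""
    # Convert normalized encoded values to approximately linear light.
    B = background_enc ** (1.0 / GAMMA_ENC)
    A = icon_enc ** (1.0 / GAMMA_ENC)
    # Background uses half the dynamic range; the icon adds up to the other half.
    linear = 0.5 * B + c * 0.5 * A
    # Re-encode for display.
    return np.clip(linear, 0.0, 1.0) ** GAMMA_ENC

# Example dichoptic pair with an interocular contrast ratio of 4 (c = 1 vs 0.25),
# using hypothetical background and icon arrays.
bg = np.random.default_rng(1).uniform(0.2, 0.8, (256, 256, 3))
icon = np.zeros_like(bg)
icon[96:160, 96:160, :] = 1.0                     # placeholder square "icon"
left = simulate_ar_eye(bg, icon, 1.0)
right = simulate_ar_eye(bg, icon, 0.25)
```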
Catch Trials.
On some trials in both experiments, we presented a non-dichoptic reference to check that participants were following the instructions.We used the matching performance during these trials to exclude participants who were not performing the task reliably.Two participants from Experiment 1 and three participants from Experiment 2 were excluded because their matching error exceeded 1.5 times the interquartile range of all participants' errors.For the results presented below, N = 32 for Experiment 1 and N = 31 for Experiment 2.
Statistical Analyses
For the contrast matching task, the results for which non-dichoptic stimulus produced the closest perceptual match to the reference stimulus were fitted with a standard binocular contrast combination model.The model assumed that the binocular contrast percept was a weighted average of the contrast shown to the left and right eyes (see Figure 8(a)).The weights for the two eyes were constrained to sum to one, such that this model contained only one free parameter.Best fitting weights for each participant were determined with a grid search that minimized the square root of the mean squared error between the data and the model prediction across the trials.An analysis of the estimated weights for the stimuli in Experiment 1, in combination with additional experiments and stimuli, were previously reported elsewhere [52] and are summarized briefly in the following Results section.The estimated weights for Experiment 2 are reported here, and were analyzed with two one-way ANOVAs to examine the effects of stimulus type and interocular difference.Follow-up pairwise comparisons were done using t-tests with Bonferroni correction.
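The one-parameter fit described above can be sketched as a simple grid search. The function below and the hypothetical trial data are illustrative only; the model is parameterized by a single left-eye weight, with the right-eye weight constrained to one minus that value, as in the description.

```python
# Sketch of the one-parameter binocular contrast combination fit
# (weighted average with weights summing to one, fit by grid search).
import numpy as np

def fit_binocular_weight(c_left, c_right, c_matched, step=0.01):
    """Return the left-eye weight w minimizing RMSE of w*cL + (1-w)*cR."""
    c_left, c_right, c_matched = map(np.asarray, (c_left, c_right, c_matched))
    best_w, best_rmse = None, np.inf
    for w in np.arange(0.0, 1.0 + step, step):
        pred = w * c_left + (1.0 - w) * c_right
        rmse = np.sqrt(np.mean((pred - c_matched) ** 2))
        if rmse < best_rmse:
            best_w, best_rmse = w, rmse
    return best_w, best_rmse

# Hypothetical trials in which the matches closely track the left eye's contrast,
# which would push the estimated left-eye weight toward 1.
cL = [1.0, 0.5, 0.25, 1.0]
cR = [0.25, 1.0, 1.0, 0.5]
match = [0.95, 0.55, 0.30, 0.95]
print(fit_binocular_weight(cL, cR, match))
```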
For the perceptual questions following the contrast matching task, we used mixed-effect logistic regression models to fit the responses and evaluate which stimulus properties were associated with different perceptual reports, with participants modeled as random intercepts.For each analysis, we include tables that report the coefficients, 95% confidence intervals, t statistics, and p values associated with a set of stimulus properties modeled as fixed effects.A qualitative examination of the data did not suggest that any notable interactions were present, so for simplicity we do not investigate or report interactions.For some analyses, we use a separate model to examine the difference between responses to monocular targets and other dichoptic stimuli so that we can treat monocular versus non-monocular targets as a categorical predictor.
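As a simplified illustration of this analysis, the sketch below fits an ordinary (fixed-effects-only) logistic regression with statsmodels and exponentiates the coefficients to obtain odds ratios. It omits the per-participant random intercepts used in the actual analysis, and the column names and simulated data are ours.

```python
# Simplified illustration only: fixed-effects logistic regression on
# simulated data, with made-up column names and no random intercepts.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 500
df = pd.DataFrame({
    "icr": rng.choice([1, 2, 4], size=n),
    "pattern": rng.choice(["grating", "noise"], size=n),
})
# Simulate a lower chance of an exact match at larger ICR.
p = 1.0 / (1.0 + np.exp(-(4.0 - 1.5 * df["icr"])))
df["exact_match"] = (rng.random(n) < p).astype(int)

fit = smf.logit("exact_match ~ icr + C(pattern)", data=df).fit(disp=0)
print(np.exp(fit.params))   # exponentiated coefficients = odds ratios
```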
Experiment 1
4.1.1Contrast Matching.The contrast matching results from this experiment were already reported in detail elsewhere [52].In brief, for most stimulus types people tended to match the non-dichoptic stimulus to the higher contrast image seen by either the left or the right eye.That is, the higher contrast image dominated binocular perception in a close to winner-take-all fashion.However, the individual variability was high for the 5 cpd grating stimulus in particular: for this stimulus, some participants' data were more consistent with simple averaging or even a loser-take-all pattern in which the lower contrast stimulus dominated the binocular percept.These results highlight the possibility that binocular contrast percepts may vary depending on the stimulus properties, which we will return to in the analysis of the contrast matching results for Experiment 2.
Probability of Finding an Exact Match.
The data indicated that the best perceptual match participants could find was not always an exact perceptual match. The probability that participants could find an exact perceptual match to the reference stimulus varied systematically as a function of the interocular contrast difference and the stimulus pattern, although there was substantial individual variation (Figure 6). Figure 6(a) shows how the magnitude of the contrast difference between the two eyes was associated with dramatic changes in the probability of a perceptual match. To characterize the contrast differences, we use the ratio of the higher contrast target to the lower contrast target, which we call the interocular contrast ratio (ICR). Our rationale is that human vision tends to follow Weber's Law (for example, the amount of luminance difference required to detect a luminance change is proportional to the background luminance), and as such this ratio is likely to reflect the salience of the contrast differences in our stimuli [14,60]. An ICR of 1 means that the reference stimulus had the same target contrast in each eye and was non-dichoptic. As expected, participants were able to find an exact perceptual match close to 100% of the time when this was the case. A larger ratio indicates a larger contrast difference between the two eyes (i.e., an ICR of 4 means one eye's contrast is four times the contrast of the other eye). As ICR increased from 1 to 4, participants were on average less likely to find an exact match, with only about a quarter of the stimuli resulting in an exact match when the ICR was equal to 4. We ran a logistic regression model with ICR (excluding monocular trials) and stimulus type as regressors. We can take the coefficients from the regression model (Table 1) and exponentiate them to obtain the odds ratios for the predictors. The coefficient of −1.62 for the ICR, for example, translates to an odds ratio of 0.20, meaning that each one-unit increase in ICR multiplies the odds of finding an exact match by 0.20 (an 80% reduction).
Next, we examine the results when the reference stimulus was monocular. Monocular reference stimuli, in which one eye had a target contrast of 0 (i.e., uniform gray embedded in a binocular surround region), have an ICR of infinity (Figure 6(a), labeled as monocular). For these stimuli, we ran a separate regression model containing a categorical predictor on a subset of the data, comparing just the monocular trials to the non-monocular trials with an ICR of 4. The results suggest that the probability of finding an exact perceptual match was not notably lower for monocular stimuli as compared to dichoptic stimuli with a large ICR (Table 2).
The probability of finding exact matches was less affected by stimulus type (Figure 6(b), Table 1). For this analysis, we used the grating stimulus as the baseline and examined the odds associated with the four other stimulus types. We found that the probability associated with the grating stimulus was not significantly different from the 1/f noise stimulus, but was significantly different from all three intermediate patterns. Compared to the grating stimulus, the odds ratios for the histogram-matched noise, bandpass noise, and broadband grating stimuli were 0.35, 1.74, and 0.27, respectively. This result suggests that the spatial pattern of the stimulus may influence the chances that people see phenomena like luster, rivalry, and depth in dichoptic stimuli, but the effect appears to be smaller compared to the effect of ICR.
Perceptual Appearance of Dichoptic Stimuli.
What perceptual effects did people experience when they were unable to find an exact perceptual match by varying stimulus contrast, and how do these perceptual effects vary across different interocular contrast ratios and stimulus types? To answer these questions, we next look at participants' responses to the follow-up questions about perceived contrast, brightness, luster, rivalry, and depth.
First, as a sanity check, we calculated which stimulus participants reported seeing luster and rivalry in. We expected participants to select the dichoptic reference stimulus as the one that elicits these perceptual phenomena, because the adjustable stimulus was always non-dichoptic and should not elicit luster or rivalry. The results were consistent with this expectation. When luster was detected in one of the stimuli, the reference stimulus was selected 98% of the time. For rivalry, it was 94%. When a depth difference was detected, participants also tended to indicate that the dichoptic stimulus was closer (84% of the time). For the brightness and contrast questions, we did not expect participants to systematically select either stimulus because we do not have a strong hypothesis that dichoptic stimuli should appear systematically higher or lower in contrast or brightness than non-dichoptic ones. Indeed, the choices for these prompts were closer to chance (56% and 61% of the time, respectively).
For the main analysis, we re-coded the data to simply indicate whether people perceived a difference or not for each perceptual factor. When participants made any response other than "same" for a given prompt, a perceptual difference was considered to be present. The average percentage of "unsure" responses across all the prompts was low (mean = 1.19%, standard error = 0.19%, median = 0.53% of all responses) and similar across all questions, and the results do not notably change if we omit these responses.
When the reference was non-dichoptic (ICR = 1), there were minimal perceptual differences, as expected from the analysis of exact matches (Figure 7(a)). That is, on these trials participants were unlikely to indicate any perceptual differences between the two stimuli. As the ICR increased, all five effects started to become more noticeable. The most common perceptual differences across all ICR levels were binocular luster, depth, and rivalry, in that order from most likely to least likely. The results also suggest that different effects tended to co-occur to some extent, because the proportions for high ICR trials sum to a value greater than 1. Indeed, experiences of these perceptual phenomena were not mutually exclusive. Across all participants, the mean number of perceptual differences per dichoptic trial was greater than 1, with marginal statistical significance (mean = 1.27, median = 1.17, t (31) = 1.94, p = 0.06), and this amount increased with increasing ICR (e.g., the mean and median were 1.60 and 1.40 for an ICR of 4, t (31) = 3.38, p = 0.002).
We used five logistic regressions to examine the occurrence of each perceptual effect separately. Table 3 (left) shows the association between ICR (ICRs of 1-4) and the presence of each perceptual effect. The ICR coefficients for all effects were positive and statistically significant, suggesting that the occurrence of all perceptual effects increased systematically as ICR increased. Based on the magnitude of the ICR coefficients, luster had the largest increase. In terms of odds ratios, we observed about a factor of 4.2 increase in the odds of luster for each one-unit increase in ICR, as compared to a 3.5 increase in rivalry, 3.6 in depth effects, and 2.2 and 1.9 increases in odds of contrast and brightness effects, respectively.
Next, we directly compared the trials with a monocular target to trials with a non-monocular high ICR target (ICR = 4) (Table 3, right). The fits to this subset of trials indicate that the monocular targets were associated with a relative increase in binocular rivalry (odds ratio of 1.67), while no other effects were notably different.
Lastly, we looked qualitatively at how the perceptual effects differed among different stimulus patterns. Figure 7(b) shows the occurrence of perceptual differences for each stimulus type out of all the dichoptic trials (i.e., all trials except when the ICR was equal to 1). The lighter the color in the matrix, the higher the likelihood that there was a difference associated with each effect (x-axis label) for the given stimulus pattern (y-axis label). The results suggest that different stimulus patterns may have a different set of perceptual effects. For example, the grating stimulus had fewer perceptual differences overall, and a slightly higher rate of rivalry than luster. The more complex patterns were associated with relatively higher rates of luster, all of which exceeded the occurrence of rivalry. Taken together, this set of results suggests that rivalry may be a concern particularly for monocular stimuli and for simple grating stimuli. These results serve to highlight the importance of investigating these perceptual effects using visual stimuli that mimic the visual appearance of genuine AR experiences, which we will describe in the next section.
Experiment 2
The stimuli used in Experiment 2 were designed to more closely mimic the visual experience of optical see-through AR, with natural backgrounds, partially transparent imagery, and a coupling of contrast changes with stimulus brightness.
Contrast Matching.
First, we looked at the contrast matching results. We fitted the contrast matching data using a simple weighted combination model where the weights for the high and low contrast eye add up to 1 (Figure 8(a)). In Figure 8(b), the weights assigned to the higher contrast stimulus across all trials for the different stimulus types are shown in the top panel, and the weights for the higher contrast stimulus for different ICRs (except ICR = 1, where the model is unconstrained) across all stimulus types are shown in the bottom panel.
The results for the different stimulus patterns are all generally consistent with previously published results, in which the higher contrast stimulus dominates (the weight on the higher contrast image was near 1, approximating a winner-take-all binocular combination rule). However, a one-way ANOVA showed that there were significant differences among the different stimulus types for the weights (F (3, 90) = 26.28, p < 0.001). Follow-up pairwise t-tests revealed all pairs of stimulus types were significantly different from each other, except for the grating and noise patterns (Table 4, left). This suggests that stimulus pattern plays a significant role in determining the bias in binocular combination, although the average weight across stimulus types was always greater than 0.78.
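The text does not spell out how the per-participant weights were estimated, so the following is only a minimal least-squares sketch of fitting the high-contrast weight w_H in the model c_match ≈ w_H*c_high + (1 - w_H)*c_low; the function and variable names are assumptions, not the authors' code.

```python
import numpy as np

def fit_high_contrast_weight(c_high, c_low, c_match):
    """Least-squares estimate of w_H in c_match ~ w_H*c_high + (1 - w_H)*c_low."""
    c_high = np.asarray(c_high, dtype=float)
    c_low = np.asarray(c_low, dtype=float)
    c_match = np.asarray(c_match, dtype=float)
    x = c_high - c_low          # regressor once the low-contrast baseline is subtracted
    y = c_match - c_low
    return float(np.dot(x, y) / np.dot(x, x))

# Example: matches that sit close to the higher contrast image yield w_H near 1.
w = fit_high_contrast_weight(c_high=[0.8, 0.4, 0.8],
                             c_low=[0.2, 0.2, 0.4],
                             c_match=[0.72, 0.37, 0.75])
print(round(w, 2))   # ~0.87
```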
The effect of ICR on contrast matching was also significant (F (2, 60) = 408.73, p < 0.001). All levels of ICR were significantly different from each other, suggesting that ICR has a robust influence on the perceived binocular contrast (Table 4, right). Importantly, monocular trials were associated with significantly greater high-contrast weights than the other conditions, showing evidence of Fechner's paradox for AR stimuli. Indeed, when the ICR was the lowest dichoptic value (2), the binocular contrast combination was closer to averaging than winner-take-all.
Probability of Finding an Exact Match.
The effect of ICR on the probability of finding an exact perceptual match was similar to Experiment 1 for the AR-like stimuli used in Experiment 2 (Figure 9(a), Table 5). In this experiment, the probability of finding exact matches when the contrast ratio was high (ICR = 4) or when the stimulus was monocular was quite low. For each unit increase in ICR, the odds of finding an exact match were multiplied by 0.12. For the four AR icon patterns used in this experiment, only the complex icon condition was associated with a match probability that was significantly different from the grating baseline, and this modulation was again substantially less than the differences associated with ICR (Figure 9(b), Table 5). When comparing the trials with a monocular AR target against the trials with a target ICR of 4, we found the monocular AR was associated with a significantly lower probability of finding an exact match (Table 6), with an odds ratio of 0.37.
Taken together, these results replicate and extend the findings from Experiment 1. The results indicate that there is substantial variation in the appearance of dichoptic AR stimuli that differ in interocular contrast, and that these appearances are subtly but lawfully modulated by the stimulus pattern.
Perceptual Appearance of Dichoptic Stimuli.
We performed a set of analyses on the perceptual appearance responses mirroring those described for Experiment 1. Similar to Experiment 1, the response patterns for luster and rivalry fit our expectations: when these effects were present, participants indicated that they saw them in the dichoptic reference 98% of the time. When a depth difference was detected, people indicated that the dichoptic stimulus was closer 98% of the time. For the brightness and contrast questions, people selected the dichoptic stimulus as higher contrast or brighter 65% and 74% of the time, respectively. Again, we coded an effect as not present if participants responded with "same," and present if they responded with one of the other three options. The number of "unsure" responses per participant was again low (mean = 0.78%, standard error = 0.10%, median = 0.33% of responses) and similar across questions. As in Experiment 1, recoding these responses did not change the pattern in the results.
When looking at whether or not effects co-occurred, the average number of perceptual effects per dichoptic trial across all participants was significantly greater than 1 (mean = 2, median = 1.82, t (30) = 6.72, p < 0.001), and this amount increased with increasing ICR (e.g., the mean and median were 2.54 and 2.42 for an ICR of 4, t (30) = 6.36, p < 0.001). Similar to Experiment 1, the probability that participants reported any perceptual effect increased as the stimulus ICR increased (Figure 10(a), Table 7, left). Luster was again the most commonly reported perceptual phenomenon associated with dichoptic imagery, and it increased the most with ICR. Across ICRs of 1, 2, and 4, there was a factor of 5.79 increase in the odds of luster for each unit increase in ICR, as compared to a 2.91 increase in rivalry, 3.62 in depth effects, 2.74 in contrast differences, and 2.91 in brightness differences.
Importantly, there were notable differences in the responses associated with AR targets that had high ICR and AR targets that were fully monocular (Table 7, right). The probability of all effects except for luster substantially increased for the monocular target compared with the non-monocular high ICR target. The odds ratios were 1.87 for contrast, 1.31 for brightness, 5.21 for rivalry, and 4.20 for depth. Qualitatively, the probability of reporting luster was lower for the monocular target, but this difference did not reach statistical significance.
The association of each stimulus type with each perceptual difference is shown in Figure 10(b). Overall, there was no strong qualitative stimulus-dependent pattern. Unlike in Experiment 1, the grating stimulus was not associated with a unique pattern that deviated from the other stimuli when presented as a semi-transparent stimulus over a natural background. All stimulus types had binocular luster and depth differences as the predominant reported effects.
Predicting Perceptual Artifacts in Dichoptic AR Stimuli
Our results highlight the importance of understanding the multi-faceted nature of dichoptic percepts, particularly with visual stimuli that closely match genuine AR experiences. For example, with the simple stimuli used in Experiment 1, participants did not consistently report any of the dichoptic perceptual effects more than 50% of the time on average. But when we switched to AR-like stimuli in Experiment 2, we observed high rates of luster (about 75% on average in extreme dichoptic cases), along with a notable increase in rivalry for monocular AR-like stimuli.
We conducted a post hoc analysis using t-tests to compare the distributions of each of the perceptual effects in the two experiments. For this analysis, we focused on the highest interocular contrast ratio (ICR = 4) and the monocular conditions. We used an initial significance threshold of p < 0.05 and did not correct for multiple comparisons to avoid being overly conservative (a Bonferroni corrected p-value threshold for these comparisons would be p < 0.005). For the ICR = 4 condition, we found that brightness differences were significantly more prevalent in Experiment 2 as compared to Experiment 1 (t (61) = 3.57, p < 0.001, Cohen's d = 0.90). For the monocular condition, all effects except luster were more prevalent in Experiment 2: contrast (t (61) = 3.50, p < 0.001, Cohen's d = 0.87), brightness (t (61) = 4.04, p < 0.001, Cohen's d = 1.01), rivalry (t (61) = 2.54, p = 0.01, Cohen's d = 0.63), and depth (t (61) = 3.12, p = 0.003, Cohen's d = 0.78). We speculate that these differences may derive from a combination of the different spatial patterns of the AR stimuli used in Experiment 2 and the more conventional stimuli used in Experiment 1 [52], as well as the fact that the AR stimuli had background content (natural foliage) that was partially visible behind the icons. However, the perceptual mechanisms that would modulate these effects in a stimulus-dependent manner are poorly understood.
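For readers who want to reproduce this kind of comparison, the sketch below computes an independent-samples t-test and Cohen's d with a pooled standard deviation; the per-participant proportions are synthetic, and only the group sizes (32 and 31 participants, giving 61 degrees of freedom) follow the description above.

```python
import numpy as np
from scipy import stats

def cohens_d(x, y):
    """Cohen's d for two independent samples, using the pooled standard deviation."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    nx, ny = len(x), len(y)
    pooled_var = ((nx - 1) * x.var(ddof=1) + (ny - 1) * y.var(ddof=1)) / (nx + ny - 2)
    return (x.mean() - y.mean()) / np.sqrt(pooled_var)

# Synthetic per-participant effect rates, used only to show the mechanics.
rng = np.random.default_rng(0)
exp1 = rng.beta(2, 5, size=32)   # Experiment 1 participants
exp2 = rng.beta(4, 4, size=31)   # Experiment 2 participants
t, p = stats.ttest_ind(exp2, exp1)   # df = 32 + 31 - 2 = 61
print(f"t(61) = {t:.2f}, p = {p:.3g}, d = {cohens_d(exp2, exp1):.2f}")
```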
Due to the differences between the two experiments, we propose that perceptually motivated guidelines for acceptable levels of ICR between two AR displays should be more conservative than might be assumed based on simpler stimuli. We can use the results from Experiment 2 to provide preliminary design guidelines for AR applications. As an example, we consider a case in which we want to adopt a strict threshold on the probability that a dichoptic stimulus contains any perceptual effects that deviate from a comparison non-dichoptic stimulus. Collapsing across all of the stimulus types (i.e., removing them as model parameters) and refitting the trial-by-trial data for the exact match question with a logistic regression model, we arrive at an equation of the form ICR_max = (b0 - ln(P / (1 - P))) / (-b1), where b0 and b1 are the fitted intercept and ICR slope, P is the designer-selected minimum proportion of trials on which the dichoptic stimulus matches a non-dichoptic one (i.e., no perceptual effect), and ICR_max is the maximum acceptable ICR. For example, if the designer aims for a threshold of P = 0.8 (a perceptual match 80% of the time), they should aim for an ICR of no more than 1.7. However, this result reflects the data on average, and given the large individual variation in our data a more strict threshold may be appropriate to accommodate users who are more sensitive to dichoptic perceptual effects. For example, a 90% threshold would be associated with an ICR of 1.28 or less. Currently, there is no publicly available dataset characterizing the typical binocular differences in contrast due to display defects and inefficiencies in AR systems. Prior work on optical eyebox limitations, however, suggests that interocular contrast differences in AR systems can span a broad range depending on the fit on the user's face and the movement of the device [4,8,43]. For example, systems with small eyeboxes may be particularly susceptible to large ICRs if one of the user's pupils moves to the edge of the eyebox. On the other hand, because our data show that observers are relatively tolerant to global differences in contrast between the two eyes, our results suggest a potential opportunity to reduce system power requirements by selectively attenuating the brightness of one display.
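As a rough illustration of how such a guideline could be applied, the sketch below inverts a logistic fit to recover the maximum acceptable ICR for a chosen match probability; the coefficients are back-solved from the two worked examples quoted above (P = 0.8 at roughly 1.7 and P = 0.9 at roughly 1.28) and are therefore approximations, not the authors' fitted values.

```python
import math

# Approximate logistic coefficients back-solved from the worked examples in the text.
B0, B1 = 4.67, -1.93   # assumed intercept and ICR slope (illustrative only)

def max_acceptable_icr(p_match):
    """Largest ICR whose predicted probability of an exact perceptual match is >= p_match."""
    return (B0 - math.log(p_match / (1.0 - p_match))) / (-B1)

for p in (0.8, 0.9):
    print(f"P = {p:.0%}: ICR_max = {max_acceptable_icr(p):.2f}")
```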
Modeling Interocular Differences and their Effect on Perception
The best metric for quantifying interocular differences in AR systems remains an open area of research. Here, we used the ratio between the overall contrast in each eye as the summary measure of interocular difference. However, there may be other metrics that could be more informative and practical. In particular, in Experiment 2 the stimuli differed in more than just contrast, so this metric is incomplete. Ideally, perceptual metrics of interocular differences should account for both luminance and contrast, and even color. For example, the luminance adjustment applied to the colored AR icons in Experiment 2 could also result in interocular color differences, especially when viewing monocular AR on a binocular background (e.g., a red monocular target against a green binocular forest background), which is known to elicit perceptual effects such as luster as well [33].
As AR technologies advance, the nature of the image quality problems and artifacts posed by these technologies will continue to change. Building better image-computable models of binocular combination will be crucial because these models can be used to develop metrics that account for arbitrary differences between the two eyes. However, the formulation of flexible models for perceived contrast in complex imagery, let alone binocular contrast perception, remains an ongoing area of research [18,34,37]. In our previous work, we explored how a Bayesian ideal observer model, which assumed binocular percepts are determined through a statistically optimal combination of binocular visual input and prior assumptions about the structure of the natural world, could explain specific properties of binocular depth perception [49]. However, this model did not account for any other properties of binocular appearance, like contrast, luster, and rivalry. Given the strong ability of Bayesian models, and related probabilistic cue combination models, to fit and predict perceptual phenomena [25], such approaches may be a fruitful way to formulate binocular perception more broadly. For example, it may be possible to predict the probability of perceived luster based on statistical regularities in the binocular differences created by lustrous metallic surfaces.
Generalizable models of binocular perception also have great appeal for developing tonemapping methods intended to improve image quality through binocular combination. For example, a recent line of research has looked at developing tonemapping methods that intentionally display different luminance and contrast information to the two eyes in order to improve the overall visual quality of a stereoscopic display system that cannot reproduce the full dynamic range of luminance found in the natural environment. Yet, at present the results of these approaches are mixed [51,55,58-60]. In our experiments, we found that people did often report contrast and brightness differences between the best match and the dichoptic reference stimulus. Furthermore, looking at the four response types, participants tended to select the dichoptic reference stimulus to be higher contrast and brighter in Experiment 2, suggesting the potential for dichoptic imagery to boost subjective image quality (the reference and the adjustable stimuli were about equally selected in Experiment 1). The current results do not reveal why participants may have experienced this "dichoptic boost" in Experiment 2, so this question represents a promising area for future research.
Potential Benefits of Dichoptic Contrast in AR
Here, we focused primarily on the potential negative consequences of interocular differences in display brightness/contrast for users of AR systems. However, some of the perceptual phenomena we characterized may be desirable. For example, the appearance that a dichoptic stimulus is closer in depth might be helpful for heads-up AR systems that display icons floating in front of the environment. However, we found that this depth effect generally co-exists with other phenomena and may be challenging to isolate. For example, as the likelihood of the depth effect increased, binocular rivalry also increased. Therefore, we did not find a "sweet spot" of interocular contrast differences where desired effects may dominate undesired ones. However, some effects were more readily detected at a lower interocular difference than others. In both experiments, binocular luster was more detectable than the other effects, and the good news is that rivalry remained relatively uncommon in comparison. This may be beneficial if designers want to leverage binocular luster to create a shiny metallic appearance for a virtual object without rivalry effects. The exception to this observation was during monocular viewing, particularly in the AR-like situation simulated in Experiment 2. In this experiment, observers detected rivalry about half of the time during monocular trials, suggesting that even if a binocular display has large interocular differences, it may be preferable to a monocular system if rivalry is a concern. Lastly, we performed exploratory analyses to see whether some perceptual effects might be minimized if the higher contrast image in a binocular display was shown to the user's dominant eye. However, we did not find compelling evidence for eye dominance effects in the current dataset.
CONCLUSION
Binocular displays can introduce unwanted visual differences between the left and right eye's views. Here, we focused on the perceptual consequences of contrast differences for optical see-through AR systems in particular, but such interocular differences can occur in any binocular display system. Across two experiments, our results suggest that the binocular appearance of dichoptic imagery is multi-faceted, and the magnitude of the interocular difference between the two eyes is a main predictor for the intrusion of potentially detrimental perceptual effects such as luster and rivalry. Our study results provide an overview of supra-threshold perceptual effects, but understanding detection thresholds for these effects will provide valuable and complementary information for display design. As we continue to improve our understanding of the perceptual phenomena associated with binocular differences in AR devices, a careful consideration of both the scope and strength of these phenomena can help guide design choices that support a high-quality user experience.
Fig. 2. Illustrations of the different binocular perceptual phenomena that can result from interocular luminance and contrast differences. Readers are encouraged to cross-fuse the left and right eye images to observe the effects since the artistic depiction is not exact. (a) Two pairs of stimuli with dichoptic luminance increments. The top row shows the winner-take-all brightness perception phenomenon. The bottom row, with a monocular contour in the eye seeing the lower luminance disk, illustrates the resulting bias toward that eye (in this case, loser-take-all). (b) Dichoptic contrast perception of more complex patterns, for example if the contrast of a grating pattern differs between the two eyes, is often dominated by the eye seeing higher contrast. (c) Binocular luster percepts can be elicited by dichoptic luminance stimuli. (d) Binocular rivalry can be elicited by pairs of images with different visual patterns in the two eyes. (e) Lastly, imagery that is anti-correlated and without binocular disparity between the two eyes can result in anomalous depth percepts. In this example, the binocular percept of the middle region (highlighted by the gray square) tends to be that it is at a different depth than the surrounding area. Luster, rivalry, and anomalous depth may also be visible when fusing panels (a) and (b).
Fig. 3. (a) Experimental setup, in which two stimuli were shown to participants in a mirror haploscope. At the start of each trial, participants were asked to match the appearance of the two stimulus targets (e.g., the circular pattern) as best they could by varying the adjustable stimulus (natural image credit: © October 2016 SYNS Dataset [1]). (b) Following the matching task, a set of follow-up questions and response options (boxed) were shown on the screen for the participants to select based on what they saw during the matching phase.
Fig. 4. (a) Five stimulus target patterns used in Experiment 1. (b) An example of two types of dichoptic references that were used: non-monocular reference stimuli had different contrast for each eye's target and both eyes' target contrasts were greater than 0, whereas monocular reference stimuli only had a target visible in one eye. Recall that all targets were embedded in a square surround region that matched the average contrast in the reference targets, and had the same type of spatial pattern as the targets.
Fig. 5. (a) Four icon stimulus patterns used in Experiment 2. (b) We simulated the appearance of an AR target on a natural background by compositing each icon [48] with a forest scene (© October 2016 SYNS Dataset [1]). (c) The AR target in the reference stimulus could be non-dichoptic, dichoptic but non-monocular, or fully monocular.
Fig. 6. Results for the exact match question in Experiment 1. Large black dots represent the average probability of finding an exact perceptual match across participants. Error bars are 95% confidence intervals. The smaller gray dots represent each participant's data. (a) The probability of exact matches across all interocular contrast ratios (ICR) in the two eyes, including monocular targets. (b) The probability of exact matches for each stimulus pattern with all ICRs included, including monocular trials.
Fig. 7. Results for the five perceptual effects measured in Experiment 1. (a) The average proportion of trials across participants (with 95% confidence interval) in which each of the effects was present as a function of interocular contrast ratio (ICR), and for monocular targets. (b) Heatmap showing the average proportion of time that each effect (x axis) was present for each stimulus type (y axis) across all dichoptic trials (ICR = 2, 4, monocular).
Fig. 8. (a) Schematic of our simple weighted combination model used to quantify binocular contrast perception for the contrast matching results in Experiment 2. Image credits: icon [48], background © October 2016 SYNS Dataset [1]. (b) The matching result is expressed as the weight for the high-contrast eye (w_H) across different stimulus types (top) and different dichoptic conditions (bottom). The large dots represent the average weights across all participants, and the small dots represent each participant's fitted weight. The 95% confidence interval for each average is either smaller than or approximately the same size as the circular marker.
Fig. 9. Results for the exact match question in Experiment 2. Large black dots represent the average probability of finding an exact perceptual match across subjects. Error bars are 95% confidence intervals. The smaller gray dots represent each participant's data. (a) The probability of exact matches across different interocular contrast ratios (ICR) in the two eyes, and for monocular targets. (b) The probability of exact matches for each stimulus pattern with all ICRs included, including monocular trials.
Fig. 10. Results for the five perceptual effects measured in Experiment 2. (a) The average proportion of trials across participants (with 95% confidence interval) in which each of the effects was present as a function of interocular contrast ratio (ICR), and for monocular targets. (b) Heatmap showing the average proportion of time that each effect (x axis) was present for each stimulus type (y axis) across all dichoptic trials (ICR = 2, 4, monocular).
Table 2. Logistic Regression Model for Experiment 1 Comparing Dichoptic Trials with Non-monocular Targets (ICR = 4) to Trials with Monocular Targets. The non-monocular trials were used as the baseline. Data are reported in the same format as Table 1 (degrees of freedom = 1,278).
Table 3. Logistic Regression Models for the Occurrence of Perceptual Effects in Experiment 1. Left: Each effect's occurrence as a function of ICR (ICRs of 1, 2, and 4; degrees of freedom = 1,438). Right: Each effect's difference in occurrence between non-monocular (ICR = 4) and monocular dichoptic trials (degrees of freedom = 1,278). Data are reported in the same format as Tables 1 and 2.
Table 4. Pairwise t-Test Results for Difference in Weights Across Different Stimulus Patterns and Different ICR Levels in Experiment 2. The test statistics (degrees of freedom = 30), Bonferroni corrected p values, and effect sizes (Cohen's d) are reported.
Table 6. Logistic Regression Model for Comparing Dichoptic Trials with Non-monocular Targets (ICR = 4) to Trials with Monocular Targets (Degrees of Freedom = 1,982). The non-monocular trials were used as the baseline. Data are reported in the same format as Table 5.
Table 7. Logistic Regression Models for the Occurrence of Perceptual Effects in Experiment 2. Data are reported in the same format as Tables 5 and 6.
Sero-epidemiological status and risk factors of toxoplasmosis in pregnant women in Northern Vietnam
Background In Vietnam, few studies have determined the epidemiological status of toxoplasmosis in pregnant women and no routine prenatal screening is in place. This study was conducted to evaluate the seroprevalence of this zoonotic parasitic infection in pregnant women in Northern Vietnam and to assess the association with awareness, risk factors and congenital toxoplasmosis. Methods Approximately 800 pregnant women were included in the study from two hospitals, one in Hanoi and one in Thai Binh province, which is known to have a dense cat population. Serological immunoglobulin G (IgG) and immunoglobulin M (IgM) detection was performed to estimate the seroprevalence of toxoplasmosis and sero-incidence of maternal and congenital toxoplasmosis. In addition, a survey was conducted about awareness, clinical history, presentation of signs and symptoms relating to toxoplasmosis and to detect biologically plausible and socio-demographic risk factors associated with toxoplasmosis. Associations with seroprevalence were assessed using univariable and multivariable analysis. Results The mean IgG seroprevalence after the full diagnostic process was 4.5% (95% confidence interval(CI): 2.7–7.0) and 5.8% (95% CI: 3.7–8.6) in Hanoi and Thai Binh hospital, respectively, and included one seroconversion diagnosed in Thai Binh hospital. Only 2.0% of the pregnant women in Hanoi hospital and 3.3% in Thai Binh hospital had heard about toxoplasmosis before this study. Conclusion Since the percentage of seronegative, and thus susceptible, pregnant women was high and the awareness was low, we suggest to distribute information about toxoplasmosis and its prevention among women of child bearing age. Furthermore, future studies are recommended to investigate why such a low seroprevalence was seen in pregnant women in Northern Vietnam compared to other countries in South East Asia and globally. Electronic supplementary material The online version of this article (10.1186/s12879-019-3885-7) contains supplementary material, which is available to authorized users.
Acquired toxoplasmosis is usually asymptomatic or results in a relatively mild acute illness in immunocompetent individuals, with some cases suffering from acquired chorioretinitis (also referred to as retinochoroiditis) and/or fatigue, yet it can cause serious disease in immunocompromised patients [8]. There is increasing evidence that chronic toxoplasmosis may also result in a number of psychiatric or neurological diseases even in immunocompetent individuals [9,10].
Congenital toxoplasmosis (CT) is caused by transplacental transmission of tachyzoites to the unborn child. Women who are seropositive have negligible risk for CT because it mainly occurs when a woman is primarily infected with T. gondii during pregnancy. Congenital toxoplasmosis is asymptomatic in most cases but it can also result in congenital defects, such as hydrocephalus, central nervous system abnormalities, chorioretinitis and even fetal or neonatal death [11][12][13]. The estimated global incidence of CT is 1.5 (95% confidence interval (CI): 1.4-1.6) cases per 1000 live births, which resulted in an estimated public health impact of 9.6 (95% CI: 5.8-15) Disability-Adjusted Life Years (DALYs) per 1000 live births [13]. Since simple primary prevention measures can reduce the risk of infection during pregnancy in seronegative women, it can be important to know her serological status at the beginning of pregnancy [14]. For these measures to be successful in the local context it is important to know the major risk factors associated with the infection.
Only few studies have determined the epidemiological status of toxoplasmosis in pregnant women and no studies have assessed the risk factors associated with toxoplasmosis in Vietnam, a densely populated country in southeast Asia with a population of around 93.5 million in 2015 [15]. As far as we know, no systematic prenatal screening nor prevention measures for toxoplasmosis are in place and the awareness is low in this country. Therefore, this study aimed to evaluate the seroprevalence of toxoplasmosis in pregnant women in hospitals in Hanoi and Thai Binh, Northern Vietnam, and to assess the association with awareness, risk factors and CT.
Study design and setting
The two study sites were the National Hospital of Obstetrics and Gynaecology in Vietnam's capital Hanoi, one of the leading hospitals in Vietnam for obstetrics and gynaecology, and the Hospital of Obstetrics and Gynaecology in Thai Binh province (120 km south east of Hanoi), in the remainder of this paper referred to as Hanoi hospital and Thai Binh hospital, respectively. Both hospitals are public hospitals, accessible for women of all layers of the society. Hanoi and Thai Binh province have an approximate population size of 7.2 million and 1.8 million, respectively [15]. Thai Binh is known as the "cat province" because of its dense cat population -pets, stray cats, and cats for meat consumption -in contrast to the rest of Vietnam [16]. More background information, eligibility criteria and the full study protocol are described in Smit et al. [17].
From October 2016 to March 2017 the gynaecologists from both hospitals identified approximately 800 eligible pregnant women attending antenatal care for the first time within their current pregnancy. At first consult, participants were asked to fill in a structured questionnaire concerning clinical history, awareness, presentation of signs and symptoms related to toxoplasmosis, and potential socio-demographic and biologically plausible risk factors (Additional files 1 and 2). Awareness was raised on the importance of toxoplasmosis and its possible consequences on pregnancy and a leaflet (in Vietnamese) was handed out explaining (congenital) toxoplasmosis and its prevention for further read-up and reminder at home. In addition, 5 ml blood were collected from participating women by the medical technicians in the hospital for further serological analysis.
Women who tested positive for IgG only were considered seropositive. Women who were IgM positive were tested again for IgG and IgM 3-4 weeks later. When positive for IgG and IgM in the first test an IgG avidity test (Roche Diagnostics; on the first serum sample; performed in the National Hospital of Obstetrics and Gynaecology in Hanoi) was done to determine the time of seroconversion. An IgG negative and IgM positive test result in the first test and an IgG positive test result 3-4 or 6-8 weeks later was considered indicative for seroconversion. At any suspicion of seroconversion, women were advised and followed-up (including an ultrasound every 4 weeks) by their treating gynaecologist.
Children were thoroughly investigated and followed-up for any signs of CT by a neonatologist/paediatrician of the parents' choice when there was an indication of seroconversion during pregnancy. For serology, blood samples were collected and tested for toxoplasmosis specific IgG, IgM, and IgA (performed in Belgium; Platelia™ Toxo IgA, Bio-Rad). The presence of IgM and IgA in neonatal serum is diagnostic for CT but the sensitivity is low and decreases when the infection occurred early during pregnancy [21,22]. Persisting IgG antibodies 9 months to 1 year after birth are an indication for CT but only if the mother is also found IgG positive in the perinatal period [21]. More information about the laboratory procedures and the full study protocol including (diagnostic) follow-up are described in Smit et al. [17].
Statistical analysis
Data from the source documents were entered in Microsoft Excel 2011 (Microsoft, Redmond, United States of America). The populations were divided in seronegative (susceptible) and seropositive (infected, i.e. seroconversion, or recovered, i.e. IgG positive only) individuals, the latter with (at least IgG) humoral immunity exceeding the threshold of the fixed diagnostic cut-off values provided by the manufacturers of the assays, implying (past) infection. Hence, the immunological status of the individual follows a Bernoulli distribution and the mean seroprevalence of toxoplasmosis in pregnant women in Hanoi and Thai Binh hospital could be determined.
The questionnaire was analysed to detect biologically plausible and socio-demographic risk factors associated with toxoplasmosis, clinical history, awareness and presentation of signs relating to toxoplasmosis. Associations between the seroprevalence of T. gondii infection and possible demographic and risk factors were explored using univariable and multivariable analysis. A generalized linear model with a binomial distribution and logit link function was applied. For categorical variables, the Pearson's chi-square test was used as a goodness-of-fit test to assess whether the observed data differed from the theoretical distribution. When any cell value had < 5 observations and/or a separation problem occurred for a variable, a logistic regression model using Firth's bias reduction method and the Fisher's exact test were used [23]. Variables with a P value under the threshold P ≤ 0.20 were analysed in a multivariable model. The multivariable model was constructed using a logistic regression model using Firth's bias reduction method, backwards selection and based on a significance level for inclusion of P ≤ 0.05.
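As an illustration of the univariable screening step (not the authors' code), the sketch below fits one logistic regression per candidate predictor and keeps those with P ≤ 0.20; it uses ordinary maximum-likelihood logistic regression from statsmodels as a stand-in for Firth's bias-reduced method, and the data and column names are synthetic assumptions.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Synthetic questionnaire-style data standing in for the real dataset.
rng = np.random.default_rng(0)
n = 400
data = pd.DataFrame({
    "seropositive": rng.binomial(1, 0.05, n),
    "age": rng.integers(18, 45, n),
    "eats_pork": rng.binomial(1, 0.9, n),
    "owns_cat": rng.binomial(1, 0.3, n),
    "soil_contact": rng.binomial(1, 0.5, n),
})

def univariable_p(df, outcome, predictor):
    """P value for a single predictor in a univariable logistic regression."""
    X = sm.add_constant(df[[predictor]].astype(float))
    fit = sm.Logit(df[outcome].astype(float), X).fit(disp=0)
    return float(fit.pvalues[predictor])

candidates = [c for c in ["age", "eats_pork", "owns_cat", "soil_contact"]
              if univariable_p(data, "seropositive", c) <= 0.20]
print("Variables carried into the multivariable model:", candidates)
```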
The Clopper-Pearson method was used to estimate the binomial confidence interval and as such to summarize the statistical uncertainty about the seroprevalence by the mean and 95% CI. All calculations were performed in R 3.5.0 (R Core Team 2018) [24].
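The Clopper-Pearson interval itself is straightforward to compute from the beta distribution; the sketch below reproduces the calculation in Python, using counts (18 of 401) that are consistent with the reported Hanoi estimate but are an assumption here rather than figures quoted directly from the paper.

```python
from scipy.stats import beta

def clopper_pearson(k, n, alpha=0.05):
    """Exact two-sided Clopper-Pearson confidence interval for a binomial proportion."""
    lower = beta.ppf(alpha / 2, k, n - k + 1) if k > 0 else 0.0
    upper = beta.ppf(1 - alpha / 2, k + 1, n - k) if k < n else 1.0
    return lower, upper

lo, hi = clopper_pearson(18, 401)   # assumed counts consistent with ~4.5% seroprevalence
print(f"{18 / 401:.1%} (95% CI: {lo:.1%}-{hi:.1%})")
```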
Results
In total, 402 eligible pregnant women were recruited in Hanoi hospital and 397 in Thai Binh hospital. Every participant was informed on prevention measures, the diagnostic test results and was offered appropriate medical information and medical follow-up if required. We found 17 women in Hanoi and 21 in Thai Binh hospital seropositive for toxoplasmosis IgG only at first visit. In Hanoi hospital four women were followed-up based on a positive IgM and negative IgG result (n = 3) or both IgM and IgG positive result (n = 1). In Thai Binh hospital seven women had a positive IgM result only (n = 6) or both IgM and IgG positive result (n = 1). Three women started treatment with spiramycine within the medical follow-up, of which one woman continued this treatment until delivery based on suspicion of a seroconversion. The two samples that showed IgG and IgM positivity showed a high avidity, which suggested an old infection, and were subsequently considered seropositive. Within the follow-up a false positive IgM result was concluded for two samples in Hanoi and two samples in Thai Binh, and were subsequently considered seronegative. In these IgM false positive cases, the gestational age at the first blood sample ranged between 9 and 13 weeks. After the first tests with IgG negative and IgM positive results, a second blood sample was taken 4-5 weeks later, which tested IgG negative and IgM negative. In all but one case, these IgM negative results were confirmed by an IgM ELISA at the National Hospital of Obstetrics and Gynaecology Hanoi.
Four women did not want to be followed-up and dropped out before a final conclusion could be made on their serological status. However, we tried to remain in contact and provided information and diagnostic testing when requested. All pregnancies within the follow-up were without abnormalities and the newborns were considered healthy. One newborn, from the mother who had been under treatment during pregnancy, was at the time of writing followed-up based on suspicion of congenital infection. The serum samples from the first 3 months of this newborn were both IgM and IgA negative and showed a decreasing IgG titer, which might mean that no congenital infection had taken place. However, the final diagnosis can only be made at 1 year after birth. The patient received proper consultation and medical follow-up by a pediatrician.
Taking into account all diagnostic results of the women who remained in the study (n = 401 in Hanoi hospital and n = 394 in Thai Binh hospital), the mean seroprevalence was 4.5% (95% CI: 2.7-7.0) and 5.8% (95% CI: 3.7-8.6) in Hanoi and Thai Binh hospital, respectively. The mean age of these women was 27 years (standard deviation (sd): 5) in Hanoi and 28 years (sd: 5) in Thai Binh hospital.
Information regarding seroprevalence, age, clinical history, presentation of signs and symptoms, awareness and the presence of cats from the questionnaire is summarized in Table 1. The questionnaire was analysed to detect socio-demographic and biologically plausible risk factors associated with toxoplasmosis. The complete results of the univariable analyses are presented in Additional file 3, while the significant associations of the univariable and multivariable models are summarized in Table 2 and Table 3 for Hanoi and Thai Binh hospital, respectively.
The data showed that, with every additional gestational week, women in Hanoi hospital had 2.29 (95% CI: 1.15-4.59) times higher odds of testing seropositive. Being employed by the government was associated with 3.11 (95% CI: 1.14-8.49) times higher odds, household-task-related contact with soil, sand, floor, pavement or street was associated with 2.65 (95% CI: 1.00-7.01) times higher odds, and a negative association with toxoplasmosis seroprevalence was observed when chicken or duck was consumed (odds ratio 0.191, 95% CI: 0.056-0.648). In Thai Binh hospital, pregnant women with the profession "street cleaning" had 18.0 (95% CI: 1.09-299) times higher odds of testing seropositive, and pork consumption was associated with lower odds (odds ratio 0.033, 95% CI: 0.003-0.359). In both hospital populations we did not find an association between the seroprevalence and owning a cat or having (stray) cats on the property, in the neighbourhood or in the work environment. Among cat owners, no significant variables were observed in either the univariable or the multivariable model.
Discussion
Since a noticeable impact of primary prevention on the burden of CT was observed by Smit et al. [25], we estimated the sero-epidemiological status and risk factors of toxoplasmosis in pregnant women in Northern Vietnam, a region with an assumed low level of awareness and lack of prevention measures. The mean estimated seroprevalence of 4.5% (95% CI: 2.7-7.0) and 5.8% (95% CI: 3.7-8.6) in Hanoi and Thai Binh hospital, respectively, were surprisingly low. With alimentary habits of eating raw/medium rare meat and raw vegetables and the presence of cats, we would expect the seroprevalence in pregnant women in Vietnam to be similar to for example the European seroprevalence, within the approximate range of 10-50% in pregnant women [26]. Studies conducted between 1959 and 2003 in Vietnam showed an overall low, yet higher toxoplasmosis seroprevalence compared to our study, with 11% in pregnant women, and ranging from 7.7 to 29% in the general population [27,28]. Yet, a similar seroprevalence of 4.2% was found in a sero-survey in 2006 on toxoplasmosis in rural areas of the northern provinces, Nghe An, Lao Cai and Tien Giang [29]. In animals, the seroprevalence has been studied in pigs (27%) [30] and in cattle and water buffaloes (11 and 3.0%, respectively) [31] but, to our knowledge, not in other animals, such as cats. Even though large variability within and between countries in South East Asia was reported before, the seroprevalence found in the current study was low compared to other countries in this region and globally [26], especially considering the alimentary habits and presence of cats. Using standard commercial ELISA kits a seroprevalence of 43% (95% CI: 36-49) was observed in Malaysian pregnant women, 31% (95% CI: 28-37) in Myanmarese pregnant women [32] and 25% (95% CI: 22-28) in pregnant women in Southern Thailand [33]. Examples of studies that showed similar low seroprevalence in pregnant women in this region were conducted in Thailand, Bangkok (ELISA: 5.3% (95% CI: 3.8-6.8)) [34] and China, Nanning, Guangxi (indirect hemagglutination test: 7.0%). In China, one of the lowest seroprevalence estimates worldwide were reported, even below 1% in some south-western provinces [35].
The low seroprevalence means that the majority of pregnant women in Northern Vietnam were seronegative, and thus susceptible, which might make dissemination of information about primary prevention important, especially since very few pregnant women have heard about toxoplasmosis and how it can be acquired (only 2% in Hanoi hospital and 3.3% in Thai Binh hospital). However, low seroprevalence might also imply a low risk of infection for pregnant women. There may be a trade-off between seroprevalence, force of infection, and average age of pregnancy. To accurately model this, a larger sample size would be required. Either way, since dissemination of information about toxoplasmosis and its prevention is relatively easy and cheap, we would suggest distributing this among women of childbearing age. Although we found a (non-significantly) higher toxoplasmosis seroprevalence in Thai Binh hospital, we could not conclude that the seroprevalence in both survey sites was associated to people owning a cat or to having (stray) cats on the property/ in the neighbourhood/ work environment. Pappas et al. [26] and Petersen et al. [1] already noticed that a surprisingly absent risk factor in most studies was contact with cats. Direct contact with T. gondii shedding cats might not result in toxoplasmosis since oocysts passed in their faeces are unsporulated and, thus, not immediately infective. However, after sporulation in the environment they are a source of infection [1,4].
A clear limitation was that the logistic regression necessarily needs sufficient limiting sample size (in our study the number of seropositives). Peduzzi et al. [36,37] suggested that logistic models produce reasonably stable estimates if the limiting sample size has approximately 10 to 15 events per predictor. In the majority of the variables this was not met, so caution is needed for statistical inference. This may also explain the few and unexpected significant associations (e.g. negative associations with the consumption of chicken or duck in Hanoi and pork consumption in Thai Binh). In addition, the significantly associated binary variables in the multivariable models, and many of the binary variables analysed in the univariable models, had a small number of observations, or in contrast a very large number of observations, which made the variables unbalanced. For example, in Thai Binh there were 2/394 pregnant women with a street cleaning profession, of which one was seropositive, and 3/394 answered they never ate pork, of which two were seropositive. Finally we cannot fully rule out confounders.
Extrapolation of the results for pregnant women in Hanoi and Thai Binh and by extension for Northern Vietnam might induce selection bias. However, since these hospitals are accessible for women of all layers of the society and women in Vietnam, especially in urban areas, tend to go to the gynaecologist from the moment they suspect to be pregnant and go for follow-up consultation and ultrasound every month until delivery, these two hospital populations might be considered representative. In addition, it is unlikely that an overrepresentation of women with potential complications occurred in the study, since we only included pregnant women attending antenatal care for the first time within their current pregnancy. Our study might have included some information bias due to the diagnostic test performances. In case of a seroconversion the ISAGA ensures very early detection of IgM, yet the test is very sensitive to residual IgM, which can persist for more than 1 year (according to the manufacturer). By retesting and following up the patients, conducting confirmatory tests, complementing the results with additional diagnostic techniques, interpreting the results taking into account the results of all (other) tests performed and the patient's history, and thorough consultation and discussion with all stakeholders involved, our protocol [17] took this into account.
Conclusion
The mean estimated toxoplasmosis seroprevalence in pregnant women in Hanoi and Thai Binh hospital was surprisingly low. Since the percentage of seronegative, and thus susceptible, pregnant women was high and the awareness was low, we suggest to increase awareness and distribute information about toxoplasmosis and its prevention among pregnant women at first consult and preferably even before pregnancy to reduce the prevalence and risk of transmission of this zoonosis. It would be interesting to investigate why such a low seroprevalence was seen in pregnant women in Northern Vietnam compared to other countries in South East Asia and globally. Further research could include investigation of the T. gondii prevalence in cats and livestock, investigation of the T. gondii strains involved, and the susceptibility of humans and/or warm-blooded animals in this region. The funding bodies have no role in the design of the study, and the collection, analysis and interpretation of data and in writing the manuscript.
Availability of data and materials
The data that support the findings of this study are available on reasonable request from the corresponding author GSAS. The data are not publicly available due to them containing information that could compromise research participant privacy/consent.
Authors' contributions PD, ER, BTLV and GSAS designed and coordinated the study. All authors made substantial contributions to the development of the study. QHD and HQP conducted and coordinated the work in the hospitals. BTLV and laboratory assistants coordinated the contact with the hospitals and conducted the laboratory analysis and database construction. EP and the technical staff from the Laboratory of Infectious Serology of Ghent University Hospital conducted the laboratory analysis of samples sent to Belgium. GSAS and BD analysed the data. PD, EP, ER, BD, NS and DTD provided technical expertise and advice. GSAS, BTLV, EP, ER, BD, and PD were major contributors in writing the manuscript. All authors have been involved in drafting and revising the manuscript and approved the final manuscript and agree with its submission to BMC Infectious Diseases. This manuscript has not been published elsewhere and is not under consideration by another journal.
Ethics approval and consent to participate
This study is approved by the Institutional Review Board of the Institute of Tropical Medicine (ITM) and the Ethics Committee of the University Hospital in Antwerp, Belgium and the initial study description is approved by the Ethical Committee of the National Institute of Malariology, Parasitology and Entomology and the Vietnamese Ministry of Health. The study was carried out according to the principles stated in the Declaration of Helsinki, all applicable national regulations and according to established international scientific standards. All participants were willing and able to provide written informed consent by signature; in case the person was illiterate informed consent was given by thumbprint and a signature of an impartial witness.
Consent for publication
Not applicable
Comparisons between Plant and Animal Stem Cells Regarding Regeneration Potential and Application
Regeneration refers to the process by which organisms repair and replace lost tissues and organs. Regeneration is widespread in plants and animals; however, the regeneration capabilities of different species vary greatly. Stem cells form the basis for animal and plant regeneration. The essential developmental processes of animals and plants involve totipotent stem cells (fertilized eggs), which develop into pluripotent stem cells and unipotent stem cells. Stem cells and their metabolites are widely used in agriculture, animal husbandry, environmental protection, and regenerative medicine. In this review, we discuss the similarities and differences in animal and plant tissue regeneration, as well as the signaling pathways and key genes involved in the regulation of regeneration, to provide ideas for practical applications in agriculture and human organ regeneration and to expand the application of regeneration technology in the future.
Introduction
Animals and plants are subjected to a variety of stimuli during their life span that can cause tissue damage. Both animals and plants promote tissue regeneration through adult stem cells or by the induction of stem cell differentiation to maintain their lives [1]. Tissue regeneration refers to the continuous renewal of biological tissues, the re-differentiation of existing adult tissues to produce new organs, or the repair process after tissue damage. It is one of the phenomena of biological life [2,3]. As plants are sessile, they face various challenges in the external environment. Both lower and higher plants have dramatic regenerative capacities. The super-regenerative capacity of plants is important for maintaining their survival [4]. The regenerative capacity of animals is species-specific. For example, planarians can regenerate whole bodies from tissue fragments of almost any part of the body [5][6][7]. Amphibians such as salamanders can also completely regenerate lost organs and limbs, such as the legs, gills, tail, retina, spinal cord, and heart [8,9]. Although the zebrafish is a vertebrate, it has dramatic regenerative capacity and is, therefore, often used as a model of organ regeneration. Zebrafish can regenerate their hearts, livers, spinal cords, and caudal fins [10][11][12]. Humans, however, can only regenerate intestinal cells, skin, and bones, either continuously or periodically [13,14].
Regeneration of animals and plants is dependent upon stem cells. Stem cells undergo differentiation and division to form the tissues or organs required by animals and plants. Plant stem cells mainly exist in the meristem, upon which the formation of plant organs is reliant [15][16][17]. The existence of meristems ensures plasticity in the growth and development of plants [18]. Plant regeneration is mainly regulated by auxin and cytokinin signaling [19]. In animals, Wnt/β-catenin, Hedgehog (Hh), Hippo, Notch, Bone Morphogenetic Protein (BMP), Transforming growth factor-beta (TGF-β), and other signaling pathways regulate tissue regeneration [20,21]. Interestingly, the target of rapamycin (TOR) plays an important regulatory role in both animal and plant regeneration. In plants, TOR is involved in the regulation of root and stem growth and callus formation [22][23][24], and in animals, TOR is a central hub for integrating nutrients, energy, hormones, and environmental signals [25,26]. Cell growth and cell cycle progression are generally tightly connected, allowing cells to proliferate continuously while maintaining their size. TOR is an evolutionarily conserved kinase that coordinately regulates both cell growth and cell cycle progression [27]. Stem cells and their metabolites have great application value in agriculture and regenerative medicine. Advances in regenerative medicine benefit human health and hold great promise in the medical field [28]. Stem cells can be regarded as ideal seed cells for genetic engineering, able to repair damaged tissues and organs and to overcome immune rejection. In this review, we discuss the regeneration mechanisms of animals and plants, highlighting the similarities and differences between these biological processes. Additionally, we summarize the main recent findings on animal and plant stem cells in the field of regeneration, and provide new ideas and directions for the protection of endangered species and the development of regenerative medicine.
Similarities and Differences in Plant and Animal Regeneration
Plants have the remarkable ability to drive cellular dedifferentiation and regeneration [19]. However, the regenerative capacity of animals varies greatly across different species. Invertebrates and amphibians generally have a high regenerative capacity [29]. In contrast, the regeneration capacity of vertebrates, such as mice, is relatively weak [30][31][32]. Whether the research subject is a planarian with strong regenerative capacity or a human with weak regeneration capacity, the fundamental mechanism of regeneration is the differentiation of stem cells into the damaged/missing tissues.
The regeneration processes of animals and plants have certain similarities. Firstly, they can be divided into the same levels of regeneration, including cell, tissue, structural, organ, and systemic regeneration [33]. Secondly, in both plants and animals, injury is the main stimulus for the formation of specialized wound tissue that initiates regeneration. A regenerative response from these organisms can be elicited by environmental insults, such as pathogens or even predatory attacks. Amputation in animals is usually, but not always, followed by the formation of a specialized structure known as a regeneration blastema. This structure consists of an outer epithelial layer that covers mesodermally derived cells, inducing a canonical epithelial/mesenchymal interaction, a conserved tissue relationship central to the development of complex structures in animals [34]. In plants, a frequent, but not universal, feature of regeneration is the formation of a callus, a mass of growing cells that has lost the differentiated characteristics of the tissue from which it arose. A callus is typically a disorganized growth, arising on wound stumps and in response to certain pathogens. One common mode of regeneration is the appearance of new meristems within callus tissue. Therefore, the plant callus and animal blastema share the characteristics of being specialized yet undifferentiated structures capable of regenerating new tissues [4]. Moreover, the process of stem cell regeneration induced by somatic cells in plants is similar to that induced by animal pluripotent stem cells. In animals, the production of induced pluripotent stem cells (iPSC) depends on the expression of many key transcription factors. Similar to animal cells, the induction and maintenance of stem cells in plants also depend on the induction and expression of several key transcription factors, such as class B-ARR, WUSCHEL (WUS), or WUSCHEL RELATED HOMEOBOX5 (WOX5). Therefore, the stem cells induced in plants that express the pluripotent genes such as WUS or WOX5 can also be called plant iPSC [35]. In addition, the regeneration of animals and plants requires the participation of stem cells.
The regenerative capacity of animals and plants varies greatly. Generally speaking, the regenerative capacity is weak in higher animals, and varies greatly between body parts (Figure 1). The skin, as well as other microorgans and tissues of animals, has a relatively fast renewal speed and strong regeneration capacity [36]. The regeneration capacity of the heart, stomach, and other organs is weak, whereas that of the liver is relatively strong [37]. Unlike certain nerve tissues that still retain axonic connections, animal nerve cells have almost no regenerative capacity; therefore, certain types of brain cell damage and senile dementia are irreversible and can only be repaired via stem cell treatment [38]. The regenerative ability of plants is generally stronger than that of animals, but also varies greatly between species. For example, the regenerative capacities of Taxus chinensis, Metasequoia glyptostroboides, and Ginkgo biloba are relatively weak, whereas those of lower plants, such as Ficus virens, Laminaria japonica, and Undaria pinnatifida, are relatively strong [39].
Stem cells are divided into totipotent stem cells, pluripotent stem cells, and unipotent stem cells [40]. The distribution of animal and plant stem cells is also quite different. In plants, stem cells existing in the shoot apical meristem (SAM) and root apical meristem (RAM) are pluripotent, and plant stem cells mainly exist in the meristem of plants for a long time [41]. Meristems can differentiate into vegetative tissues, protective tissues, conducting tissues, mechanical tissues, secretory tissues, and other plant cell populations with identical physiological functions and morphological structures to form vegetative and reproductive organs of plants [42,43]. In addition, plants can also produce calluses, which are similar to stem cells, and are the tissue formed by somatic cells in response to injury and dedifferentiation [19,44,45].
There is often a lack of stem cell aggregation in animal tissues; however, animal stem cells are widely distributed in various tissues and organs, though in small numbers [46]. In addition, due to the differences in evolution, there are significant differences in the signal pathways and regulators regulating plant and animal regeneration (Tables 1 and 2). In plants, a feedback regulation pathway is formed between WUS and CLAVATA (CLV), which regulates the steady state of stem cells in stem tips [47]. The SHORTROOT (SHR)-SCARECROW (SCR) signaling pathway plays a key role in maintaining apical meristems [48,49]. In animals, the Wnt and Notch classical signaling pathways regulate self-renewal of hematopoietic, intestinal epithelial, skin, and neural stem cells [50]. Among the animal regulators listed in Table 2, Notum, acting in Wnt signaling, promotes the regeneration of aging tissues [73]; short-term induction of Oct-3/4, Sox2, Klf4 and c-Myc (OSKM) in muscle fibers can promote tissue regeneration by changing the niche of stem cells, although the associated pathway remains unclear [74]; and Early growth response (EGR), acting through Jun N-terminal kinase (JNK) signaling, serves as a whole-body regeneration "switch" [75].
Molecular Mechanisms of Plant and Animal Regeneration
There are great differences in the regenerative capacities of animals and plants, and the involved signaling pathways are also different. Even in plants, the transcription factors and signal pathways regulating SAM and RAM regeneration vary [80]. SAM is formed in the early stage of embryonic development and is structurally divided into the central zone (CZ), rib zone (RZ), and peripheral zone (PZ). The CZ region is composed of pluripotent stem cells in an undifferentiated state, with a long cell division cycle; the RZ region provides cell support for the vascular meristem; and the PZ region is the core region for further cell division, differentiation, and development into lateral organs [81]. In SAM, STM and WUS are essential for stem cells to remain undifferentiated [57]. STM can inhibit the differentiation while maintaining the proliferation of meristem cells, and can also integrate mechanical signals that play a role in the formation of lateral organs [82]. Plant stem cells require induction niches. In SAM, this role is played by cells located in the organizing center (OC). At the molecular level, the OC is defined by highly localized expression of the homeodomain transcription factor WUS [83]. WUS fluidity is highly directional, but its specific mechanism has not yet been elucidated. CLV3, as a major stem cell-derived signal, connects WUS with STM. In Arabidopsis, WUS and STM form heterodimers and combine with the promoter region of CLV3, ensuring a stable number of stem cells [56]. CLV3 is a short secretory peptide modified after processing and translation. CLV3 peptides diffuse in the interstitial space and act by binding with a group of related leucine rich repeat (LRR) receptor complexes found on the plasma membrane [84]. The joint action of these receptors is to combine with CLV3 to activate intracellular signaling cascades. The net effect of CLV signaling is reduced WUS expression, defining a local negative feedback loop to induce WUS migration from the OC to stem cells in order to maintain their fate [85]. In addition, STM gene expression depends on WUS, and WUS-activated STM expression enhances WUS-mediated stem cell activity ( Figure 1) [47,56].
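Viewed abstractly, the WUS-CLV3 circuit described above is a negative feedback loop: WUS (together with STM) activates CLV3, and CLV3 signaling through its LRR receptor complexes reduces WUS expression. The following toy model is a minimal sketch of how such a loop can hold a signal at a stable set point; it is an illustrative abstraction with arbitrary parameter values, not a model drawn from the cited studies.

```python
# Toy model of a WUS-CLV3-style negative feedback loop (illustrative only;
# parameter values are arbitrary and not taken from the reviewed studies).

def simulate(steps=50, wus=1.0, clv3=0.0,
             k_wus=1.0, k_clv3=0.8, repression=2.0, decay=0.5):
    """Iterate a minimal two-variable feedback: WUS induces CLV3,
    CLV3 represses WUS production, and both decay each step."""
    history = []
    for _ in range(steps):
        wus_next = wus + k_wus / (1.0 + repression * clv3) - decay * wus
        clv3_next = clv3 + k_clv3 * wus - decay * clv3
        wus, clv3 = wus_next, clv3_next
        history.append((wus, clv3))
    return history

if __name__ == "__main__":
    trajectory = simulate()
    print("final WUS, CLV3 levels:", trajectory[-1])  # settles near a steady state
```

In this toy, increasing the repression strength lowers the steady-state WUS level, which mirrors the qualitative effect attributed to CLV signaling in the text.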
In addition, in SAM, the local regulatory system appears insufficient to synchronize stem cell behavior without developmental or environmental input. Communication between peripheral developmental organs and central stem cells in SAM is mainly controlled by phytohormones, among which auxin and cytokinin have the greatest impact [86]. Cytokinin acts as a cell cycle inducer and is important for WUS activation, while auxin mainly triggers peripheral differentiation [87]. Interestingly, auxin also enhances the output of cell division proteins by directly inhibiting the expression of negative feedback regulators of cytokinin signal transduction [86]. Recent studies have found that TOR kinases play a central role in metabolism, light-dependent activation of WUS, and stem cell activation in SAM [23]. RAM is mainly regulated by the auxin-dependent PLT pathway and the auxin-independent SHR/SCR pathway [88,89]. Key transcription factors such as SHR, SCR, and PLT1/2/3/4 play a crucial role in the organization and maintenance of RAM. SCR is expressed in the quiescent center and endodermis, and SHR is expressed in the periapical stele cells. Both are necessary to maintain quiescent center function and jointly provide signals for the stem cell microenvironment [48,49]. In addition, PLTs strongly affect the characteristics, cell expansion, and differentiation of stem cells and RAM by forming gradients which depend on the stability and movement of PLT proteins [90,91]. PLTs and auxin gradients are correlated, but also partially independent (Figure 1A) [92,93].
The regeneration process includes tissue repair, de novo organ regeneration, the formation of wound-induced calluses, and somatic embryogenesis. Root tip repair involves a wounding response, redistribution of auxin and cytokinin, reconstruction of the quiescent center (QC), and stem cell niche re-establishment [94]. Studies have found that damage-induced jasmonic acid (JA) signaling can also activate stem cells to promote regeneration: JA signaling regulates the expression of the RETINOBLASTOMA-RELATED (RBR)-SCR molecular network and the stress response gene ERF115 to activate the root stem cell organizing center, thereby promoting root regeneration. Auxin activates WUSCHEL RELATED HOMEOBOX11/12 (WOX11/12) to transform root-initiating cells into the root primordium. During this process, the expression level of WOX11/12 decreases, whereas that of WOX5/7 increases. The WOX11/12 protein directly binds to the WOX5/7 promoter to activate its transcription, whereas WOX5/7 mutation leads to defects in primordium formation [65]. At the genetic level, the highly specific and QC-expressed gene WOX5 delineates QC identity and maintenance [95]. WOX5 activity most likely occurs through a direct effect on cell cycle regulators. Plants with disrupted expression levels of WOX5 show aberrant differentiation rates of the distal stem cells, indicating the role of WOX5 in preventing stem cell differentiation [96]. In contrast to SAM, where auxin triggers differentiation, hormones need to specify niches and maintain cell proliferation in RAM. Cytokinin mainly acts far away from the root tip and promotes differentiation through mutual inhibition with auxin [97]. However, cytokinins have also been shown to counteract the unique properties of QC cells by reducing auxin input from the surrounding environment and inducing cell division [98]. Maintaining stem cell homeostasis in the stem and root niches is essential to ensure that sufficient numbers of new cells are generated to replace lost cells and to support the proper differentiation, growth, and formation of new tissues and organs. It is worth noting that the RBR protein is a plant homologue of RB (a tumor suppressor protein) and plays a crucial role in SAM and RAM [99,100]. As in animals, RBR in plants inhibits cell cycle progression by interacting with E2F transcription factor homologues. In addition, decreased RBR levels lead to increased numbers of stem cells, while increased RBR levels lead to stem cell differentiation, indicating that RBR plays an important role in stem cell maintenance. At present, RBR is a protein known to be involved in stem cell function that is conserved between the animal and plant kingdoms [1]. Interestingly, TOR not only plays a role in SAM stem cell activation, but also promotes QC cell division in RAM (Figure 1A) [101].
De novo root regeneration is the process by which adventitious roots form from wounded or detached plant organs. Auxin is the key hormone that controls root organogenesis, and it activates many key genes involved in cell fate transition during root primordium establishment [102]. The detached leaves of Arabidopsis thaliana can regenerate adventitious roots on hormone-free medium [103]. From 10 min to 2 h after leaf detachment, a wave of JA is rapidly produced in detached leaves in response to wounding, but this wave disappears by 4 h after wounding [104]. JA activates the expression of transcription factor gene ERF109 through its signaling pathway, which, in turn, up-regulates the expression of ANTHRANILATE SYNTHASE α1 (ASA1). ASA1 is involved in the biosynthesis of tryptophan, a precursor of auxin production. After 2 h, the concentration of JA decreased, resulting in the accumulation of JAZ protein, which could directly interact with ERF109 and inhibit ERF109, thus turning off the wound signal. In general, the post-injury JA peak promotes auxin production and, thus, promotes root regeneration from the cuttings. Root organogenesis also requires a strict turning-off of the JA signal [105].
Callus formation is one of the most important methods of plant regeneration. Studies have analyzed why calluses have regenerative capacity. Through single-cell sequencing of Arabidopsis hypocotyl calluses, researchers confirmed that calluses are similar to the root primordium or root tip meristem, and can be roughly divided into three layers: the outer cells are similar to the epidermis and root cap of the root tip, the middle layer cells to the quiescent center (QC), and the inner cells to root tip initial vascular cells. The middle layer cells of calluses had transcriptome characteristics highly similar to those of the root tip QC, and were also the source stem cells for root and shoot regeneration [59]. ARR12, of the cytokinin signal transduction pathway, is the main enhancer of callus formation [62]. APETALA2/ETHYLENE RESPONSE FACTOR (AP2/ERF) transcription factors, such as WIND1, ERF113/RELATED TO AP2 L (RAP2.6L), ESR1, and ERF115, in Arabidopsis thaliana are key regulators of rapid wound-induced regeneration. Wounding upregulates cytokinin biosynthesis and signal transduction, thereby promoting cell proliferation and callus formation [60,[106][107][108]. WIND1 can promote callus formation and shoot regeneration by upregulating ESR1 (Figure 1A) [45].
Plants can undergo multiple regenerative processes after wounding to repair wounded tissues, form new organs, and produce somatic embryos [109]. Plant somatic embryogenesis refers to the process by which somatic cells produce embryoids through in vitro culture [110]. This process can occur directly from the epidermis, sub-epidermis, cells in suspension, protoplasts of explants, or from the outside or inside of a callus formed from dedifferentiated explants. The transformation from somatic cells to embryogenic cells is the premise of somatic embryogenesis. In this process, the isolated plant cells undergo dedifferentiation to form a callus. The callus and cells undergo redifferentiation into different types of cells, tissues, and organs, and finally generate complete plants [111]. This process involves cell reprogramming, cell differentiation, and organ development, and is regulated by several transcription factors and hormones [112]. For example, the WUS gene regulates the transformation of auxin-dependent vegetative tissues to embryonic tissues during somatic embryogenesis [113,114]. Overexpression of WUS can induce somatic embryogenesis and shoot and root organogenesis. Ectopic expression of the WUS gene can dedifferentiate recalcitrant materials that do not undergo somatic embryogenesis easily to produce adventitious buds and somatic embryos [115]. Additionally, LEAFY COTYLEDON 1 (LEC1), highly expressed in embryogenic cells, somatic embryos, and immature seeds, can promote somatic cell development into embryogenic cells. Furthermore, LEC1 can maintain the fate of embryogenic cells at the early stage of somatic embryogenesis. At present, LEC1 is used as a marker gene for somatic embryogenesis in several species [116]. Unlike LEC1, LEC2 can directly induce the formation of somatic embryos, which may activate different regulatory pathways [117].
In recent years, through research on animals with strong regeneration capacities, such as planarians, leeches, and salamanders, it was found that the early stages of regeneration are jointly regulated by cell death/apoptosis-related genes, MAPK signal-related genes, and EGR [118]. In plants, programmed cell death (PCD) plays crucial roles in vegetative and reproductive development (dPCD), as well as in the response to environmental stresses (ePCD) [119,120]. Sexual reproduction in plants is important for population survival and for increasing genetic diversity. During gametophyte formation, fertilization, and seed development, there are numerous instances of developmentally regulated cell elimination, several of which are forms of dPCD essential for successful plant reproduction [121]. In the late stages of regeneration, many signal pathways participate in cell proliferation and regulation of various responses. The Wnt signaling pathway is widely distributed in invertebrates and vertebrates, and is a highly conserved pathway during evolution. Wnt signaling plays an important role in early embryonic development, organ formation, tissue regeneration, and other physiological processes [69,122,123]. Wnt proteins are a family of 19 highly conserved secretory glycoproteins that act as ligands for several receptor-mediated signaling pathways, including those that regulate processes throughout development [123]. The classic Wnt signaling pathway is mainly mediated by β-catenin. β-catenin is a multifunctional protein which helps cells respond to extracellular signals and influences by interacting with the cytoskeleton [124]. When Wnt binds to its membrane receptor, Frizzled (FZD), it activates the intracellular protein Dvl. Dvl receives upstream signals in the cytoplasm and is the core regulator of the Wnt signaling pathway. Wnt inhibits the function of the β-catenin degradation complex formed by APC, AXIN, CK1, glycogen synthase kinase 3β (GSK3β), and other proteins, thus stabilizing β-catenin in the cytoplasm. Stably accumulated β-catenin in the cytoplasm enters the nucleus and binds to the TCF/LEF transcription factor family to initiate the transcription of downstream target genes, such as c-Myc and cyclin D1, in order to promote regeneration. TCF/LEF transcription factor's association with β-catenin initiates the expression of key genes in the multiple Wnt signaling pathways [50,125]. The Wnt signaling pathway is important for human development and the maintenance and regulation of adult stem cells, but improper Wnt activation can lead to carcinogenesis [126]. For example, in the differentiation of mouse embryonic stem cells (mESC), Wnt activation of β-catenin signaling inhibits myocardial differentiation and promotes endothelial and hematopoietic lineage differentiation. During vertebrate embryonic development, Wnt activation induces ESCs to enter the anterior and posterior lamellar mesoderm (LPM). In pre-LPM, Dickkopf (Dkk) is secreted from the endoderm, preventing Wnt from binding to its receptor and leading to the induction of the cardiogenic mesoderm and the formation of cardiac progenitor cells (CPC) ( Figure 1B) [123].
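The canonical cascade just described follows a simple logical chain: Wnt binding to FZD (unless blocked by Dkk) activates Dvl, Dvl inactivates the destruction complex, β-catenin then accumulates and, together with TCF/LEF, switches on targets such as c-Myc and cyclin D1. The short sketch below encodes that chain as boolean logic; it is an illustrative abstraction of the description above (the function name and structure are hypothetical), not a quantitative model of Wnt signaling.

```python
# Boolean sketch of the canonical Wnt/beta-catenin cascade described above.
# Purely illustrative; real signaling is quantitative and context-dependent.

def canonical_wnt(wnt_present: bool, dkk_present: bool = False) -> dict:
    """Follow the qualitative chain: Wnt + FZD -> Dvl active -> destruction
    complex off -> beta-catenin stable -> TCF/LEF targets transcribed."""
    receptor_engaged = wnt_present and not dkk_present   # Dkk blocks Wnt-receptor binding
    dvl_active = receptor_engaged
    destruction_complex_active = not dvl_active          # APC/AXIN/CK1/GSK3beta complex
    beta_catenin_stable = not destruction_complex_active
    targets_transcribed = beta_catenin_stable            # via nuclear TCF/LEF binding
    return {
        "destruction_complex_active": destruction_complex_active,
        "beta_catenin_stable": beta_catenin_stable,
        "targets_on (c-Myc, cyclin D1)": targets_transcribed,
    }

print(canonical_wnt(wnt_present=True))                    # pathway on
print(canonical_wnt(wnt_present=True, dkk_present=True))  # Dkk blocks the receptor
```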
Similar to Wnt signaling, Notch signaling is a highly conserved signaling pathway that is widely involved in various regeneration processes in different organs, such as the tail fin, liver, retina, spinal cord, and brain [127]. Notch signaling also plays an important role in the self-renewal and differentiation regulation of stem cells. In stem cell biology, Notch signal transduction is highly environmentally dependent, and the biological consequences of pathway activation vary from maintaining or expanding stem cells to promoting stem cell differentiation [128]. Researchers found that Notch receptors and ligand expression were up-regulated during zebrafish fin regeneration in 2003, and many studies have also shown that Notch signaling plays a key role in fin repair, regulating venous arterialization, and cell proliferation and differentiation [129]. Notch signaling can also regulate duct cell accumulation and biliary tract differentiation, promote the expansion and differentiation of liver progenitor cells, and antagonize Wnt signaling during liver regeneration. However, different Notch receptors have different effects on hepatocytes, confirming the complex functions of Notch signaling in the treatment of liver diseases [130]. Notch signaling is mediated by the interaction between Notch ligands and receptors in adjacent cells. There are four kinds of Notch receptors (Notch1-4) in mammals, which are composed of three parts: the extracellular domain (NEC), transmembrane domain (TM), and intracellular domain (NICD). The Notch protein is cleaved three times, and its NICD is released into the cytoplasm and enters the nucleus to bind to the transcription factor CBF-1, suppressor of hairless, Lag (CSL) to form a transcriptional activation complex. The CSL protein is a key transcriptional regulator in the Notch signaling pathway, which is also known as the classical Notch signaling pathway or the CSL-dependent pathway. It activates the Hairy Enhancer of Split (HES), Hairy, and Enhancer of split-related genes with the YRPW motif (HEY), homocysteine-induced ER protein, and other basic helix-loop-helix (bHLH) transcription factor families of the target genes [131,132]. For example, Notch signaling can enhance bone regeneration in the mandibles of zebrafish, and is reactivated after valvular damage in zebrafish larvae and adults, which is necessary in the initial stage of heart valve regeneration ( Figure 1B) [133].
In addition, the more conserved Hh pathway also plays a key role in adult tissue maintenance, renewal, and regeneration [134]. The Hh protein has been identified in many animals, from jellyfish to humans. Drosophila has only one Hh gene, while vertebrates have 3-5. All Hh proteins are composed of the N-terminal "Hedge" domain and the C-terminal "Hog" domain. The Hedge domain mediates protein signaling activity. The Hog domain can be further subdivided into the N-terminal Hint domain and the C-terminal sterol recognition region (SRR). The N-terminal Hint domain is similar in sequence to self-splicing inteins, and the C-terminal SRR binds to cholesterol [135]. Hh signal transmission is mediated by two receptors on the target cell membrane, Patched (Ptc) and Smoothened (Smo). The receptor Smo is encoded by the proto-oncogene Smoothened and is homologous to G-protein-coupled receptors. It is composed of a single peptide chain with seven transmembrane regions; the N-terminus is located outside the cell and the C-terminus inside the cell. The amino acid sequence of the transmembrane region is highly conserved [136]. The serine and threonine residues at the C-terminus are phosphorylation sites that acquire phosphate groups when acted upon by protein kinases. The members of this protein family function as transcriptional activators only when they maintain their full length and can then start the transcription of downstream target genes; when the carboxyl end is hydrolyzed by the proteasome, a transcriptional repressor is formed that inhibits the transcription of downstream target genes. Smo is a necessary receptor for Hh signal transmission. Glioma-associated oncogene transcription factors (GLI) are transcriptional effectors of the Hh pathway. Stimulated by Hh signal transduction activation, GLI proteins are differentially phosphorylated and processed into transcriptional activators that induce the expression of Hh target genes to initiate a series of cellular responses, such as cell survival and proliferation, cell fate specification, and cell differentiation [137,138]. A previous study found that Hh signaling mediates liver regeneration by regulating DNA replication and cell division. Treatment of mice with Hh inhibitors caused a slowing of cell proliferation and mitotic arrest, which led to the inhibition of liver regeneration. Mice treated with the Hh inhibitor vismodegib showed inhibited liver regeneration, accompanied by significant decreases in the expression of the Hh-inducible factors GLI1 and GLI2 (Figure 1B) [139].
The Hippo signaling pathway is a major regulator of cell proliferation, tissue regeneration, and organ size control [132]. Hippo is highly conserved in mammals, controlling development and tissue organ homeostasis; imbalances can lead to human diseases such as cancer [140]. The core of the Hippo pathway is the kinase cascade; that is, mammalian STE20-like1/2 (Mst1/2) (Hippo homolog) and Salvador 1 protein (SAV1) form a complex that phosphorylates and activates large tumor-suppressing kinases (LATS1/2). LATS1/2 phosphorylates and inhibits transcription coactivators such as Yes-associated proteins (YAP) and transcriptional coactivators with PDZ-binding motifs (TAZ) [141]. LATS1/2 is a protein kinase that plays an important role in the Hippo signaling pathway, and exhibits anticarcinogenic activity. LATS1/2 deletion enhances TAZ/YAP activity and directly activates oncogene expression [142]. During tissue damage, the activity of YAP, the main effector of the Hippo pathway, is instantaneously induced, which in turn promotes the expansion of tissue-resident progenitor cells and promotes tissue regeneration [143]. Recent animal model studies have shown that the induction of endogenous cardiomyocyte proliferation is crucial for cardiac regeneration, and inhibition of Hippo signaling can stimulate cardiomyocyte proliferation and cardiac regeneration [144]. TGF-β superfamily signal transduction plays an important role in regulating cell growth, differentiation, and development in many biological systems [145]. TGF-β signaling phosphorylates Smad proteins and transports them to the nucleus. Activated Smad proteins regulate a variety of biological processes by binding to transcription factors, leading to cell state-specific transcriptional regulation [146]. For example, TGF-β signaling in zebrafish promotes cardiac valve regeneration by enhancing progenitor cell proliferation and valve cell differentiation. In addition, TGF-β superfamily members also play important roles in the steady renewal and regeneration of the adult intestine ( Figure 1B) [147,148].
TOR signaling pathways are present in both animals and plants, and are also associated with regeneration. Plant growth is affected by light and glucose, which are known activators of the TOR pathway [149]. The TOR signaling pathway is involved in root and stem growth and callus formation, and TOR phosphorylates downstream cell cycle factor E2Fa to promote these processes [22,24]. Moderate expansion of the Akt gene in animals activates the mTOR signaling pathway and promotes cell proliferation [150]. GSK3β is a direct substrate of Akt and is inhibited by Akt during animal regeneration [151]. BR-INSENSITIVE 2 (BIN2) was the first plant GSK3-like kinase to be characterized by genetic screening. The kinase domain of the GSK3-like kinase found in Arabidopsis and rice has 65-72% sequence homology to human GSK3β [25,152]. Biochemical and genetic analyses have confirmed that BIN2 plays a negative role in BR signal transduction and the regulation of cell growth. However, in plants, it was found that TOR can regulate the phosphorylation level of the neglected ribosomal protein S6 kinase beta 2 (S6K2), and S6K2 can interact with BIN2 to directly phosphorylate BIN2 and regulate plant growth [153]. The conserved characteristics of TOR signaling in the normal physiology and regeneration of animals and plants suggest its important role in maintaining normal physiological homeostasis of animals and plants.
Applications of Regeneration Technology
The growth and development of animals and plants is a process of differentiation from totipotent stem cells (fertilized eggs) to pluripotent stem cells, and then to specialized stem cells [154]. Conversely, the terminally differentiated cells of animals and plants, which carry complete genetic material, also have the potential to transform into stem cells. In plants, somatic cells can restore their totipotency through dedifferentiation and regenerate intact plants. Consistently, studies of animal cell stemness found that transferring four transcription factors, octamer-binding transcription factor 4 (Oct4), SRY-box transcription factor 2 (Sox2), Kruppel-like factor 4 (Klf4), and c-Myc, into mouse fibroblasts can reprogram them into iPSCs. This discovery indicates that differentiated cells can be reprogrammed to give rise to all cell types [155,156]. Stem cells and their metabolites, from both plants and animals, are widely used in agriculture, animal husbandry, and regenerative medicine (Figure 2). Plant totipotent stem cells have good application potential in crop breeding. The totipotent stem cells of animals in the placenta can be cryopreserved to treat some diseases after adulthood. Plant pluripotent stem cells and their metabolites can be used in the development of drugs, health foods, and cosmetics. For animals, iPSCs can in principle produce various necessary organs, but at present, due to ethical constraints, artificial organs have not been allowed [154]. Artificial meat made from animal multipotent stem cells can also be used for pet disease treatment. The unipotent stem cells of plants are also used for the extraction of some pigment substances. In addition, Taxus stem cells in suspension culture can produce anti-cancer substances such as Taxamairin A and B, and the unipotent stem cells in milk have therapeutic potential in treating some animal diseases [157].
In agriculture, plant genetic transformation and callus culture are key processes in crop gene editing and breeding [158]. A previous study found that overexpression of the wheat WUSCHEL family gene TaWOX5 can significantly improve transformation efficiency, and that callus culture can aid wheat transgenics [159]. In Arabidopsis, the injury-inducing factor WIND1 can promote callus formation and bud regeneration by upregulating Arabidopsis ESR1 expression, and the esr1 mutant shows defects in callus formation and bud regeneration [61]. This finding is of great significance for in vitro plant tissue culture. Regenerating adventitious roots from cuttings is a common plant clonal reproduction biotechnology in the forestry and horticulture industries. Plant somatic embryogenesis also has broad application prospects in artificial seeds, haploid breeding, asexual reproduction, and germplasm conservation [160]. Plant viral diseases are serious agricultural diseases, significantly affecting the yield/quality of crops and leading to crop failure. Stem tip virus-free technology is the only effective biotechnology to be found thus far that can remove viruses from plants. It has been widely used in agricultural production to obtain virus-free seedlings, and has also been applied in potatoes, fruit trees, flowers, and other crops. Stem cells and their daughter cells of SAM from Arabidopsis thaliana can inhibit infection with the cucumber mosaic virus (CMV). The mechanism study found that viruses cause local WUS protein induction and accumulation in stem cells, as well as subsequent migration to surrounding compartments. By directly inhibiting protein synthesis in cells, the replication and transmission of viruses can be restricted, which can protect stem cells and their differentiated daughter cells from viral infections [161]. The WUS protein has anti-viral characteristics in plant stem cells, and can help plants resist viral invasion.
With growth in the global population and meat demand, the harmful effects of animal husbandry on the environment and climate will increase [162]. Moreover, animal-borne diseases and antibiotic resistance are harmful to humans [163]. One suggested way to reduce the consumption of animal meat is to increase the production of artificial meat from species-specific iPSCs, which could also avoid many of the environmental and ethical issues that occur with traditional meat production [164]. In 2013, the Dutch biologist Mark Post produced the first piece of artificial meat in history using animal cell tissue culture, which attracted widespread attention [165]. Cell-cultured artificial meat is mainly composed of skeletal muscle containing different cell types. These skeletal muscle fibers are formed by the proliferation, differentiation, and fusion of embryonic stem cells or muscle satellite cells. Post's team first isolated primitive stem cells capable of growth and differentiation; by adding a culture medium rich in amino acids, lipids, and vitamins, they accelerated cell proliferation and differentiation and obtained a large number of bovine muscle tissue cells [166]. The production of cultured meat requires robust cell sources and types. In order to achieve the scale required for the commercial production and sale of cultured meat products, it is necessary to further develop immortalized, specialized cell lines. In addition to technical challenges, the relationship between cultured meat and social/cultural phenomena and social systems must also be considered [167]. In the racing industry, tendon and ligament injuries are common problems that can end the careers of racehorses. Therefore, stem cell therapy has received attention in this field. Common clinical applications include the use of stem cells to treat tendon and ligament strains in the joints of horses [168].
Stem cell technology also has applications in the medical beauty industry. Some plants contain raw materials needed in cosmetics, and stem cell culture can overcome barriers such as low endogenous content and difficult extraction methods [169]. For example, plant cell culture technology can be used to derive certain mint-based hair care products [170,171]. Plants containing antioxidant substances, such as grapes and cloves, can be used in antiultraviolet light protection skincare products. Plant stem cells can be used to obtain these antioxidant components at a more efficient rate [172]. Although plant stem cells are widely used in the medical beauty field, their full potential remains to be explored due to the lack of scientific evidence and the large variety of flora that may have potential for stem cell culture. In addition, Taxus chinensis and Catharanthus roseus suspension cell cultures can also be used to produce taxol-and vinblastine-based anticancer substances [173,174]. Although promising advances have been made in the field of plant stem cells and their various applications, it is unclear whether plant-derived extracts and stem cell extracts have race-specific effects in humans.
Regenerative medicine is a new research area in the field of medicine. It uses biological and engineering methods to create lost or damaged tissues and organs so that they mimic the structure and function of normal tissues and organs [175]. At present, stem cell therapy is a widely used type of regenerative medicine therapy, and plays an important role in the treatment of chronic diseases, including autoimmune diseases, leukemia, heart disease, and urinary system problems [28,176]. Autoimmune Addison's disease (AAD) is an inevitably fatal disease in the absence of treatment. Affected patients must receive steroid replacement for life to survive. Studies have found that AAD can be improved by manipulating endogenous adrenal cortical stem cells to enhance adrenal steroid production [177]. Hematopoietic stem cell transplantation can be used to treat leukemia, and around 80-90% of leukemia patients show improvement after hematopoietic stem cell transplantation, of which 60-70% enter remission [178]. The cardiac regenerative medicine field is currently facing challenges due to the lack of cardiac stem cells in adults, low turnover rate of mature myocardial cells, and difficulty in providing treatment for injured hearts. At present, cell reprogramming technology has been applied to generate patient-specific myocardial cells through both direct and indirect methods [179]. Stem cell therapy can also be used to treat stress-induced urinary incontinence, and preclinical studies have made advances in regenerating the urethral sphincter by using secretory group cells or chemokines that can return repair cells to the injured site [180].
In addition, regenerative medicine is closely related to tissue engineering. At present, organ transplantation is still widely used to replace failed tissues and organs. However, with substantial increases in the demand for organ transplantation in recent decades, it is difficult to maintain an adequate supply of available organs [181]. The emergence of 3D biological printing technology has made up for the lack of supply of tissues and organs. Compared with traditional tissue engineering methods, 3D bioprinting utilizes a more automatic process and can create more advanced scaffolds with accurate anatomical characteristics, allowing the precise co-deposition of cells and biomaterials [182]. 3D biological printing technology is also used in cancer research, drug development, and even clinician/patient education [183]. However, there are still some issues with 3D biological printing technology, such as limitations with biological inks and printers, as well as the size of the end product. At present, bioprinted tissues are often small and composed of only a few cell types, resulting in limited function and scalability [184,185]. In addition, the cost of 3D biological printing is high, and the resolution requires further improvement.
Although stem cell therapy has good outcomes, it also has safety risks. For example, pluripotent stem cells themselves have the ability to form teratomas [186]. iPS cells established using retroviral vectors to introduce exogenous genes may retain or reactivate the expression of those genes during differentiation, which can affect the direction of differentiation and increase the risk of carcinogenicity [187]. To fully realize the benefits of regenerative medicine, the real and imagined boundaries of social, ethical, political, and religious views must be addressed [188,189]. We must carefully weigh the potential therapeutic benefits of the clinical application of stem cells against the possible side effects in each patient and disease indication, because the clinical use of stem cells can lead to overly high expectations. Our decision-making process regarding disease management should continue to follow firmly the conservative principles of evidence-based medicine.
Conclusions and Future Perspectives
At present, it is not uncommon to utilize stem cells in both medicine and agriculture, such as for the effective repair of damaged tissues and organs and to treat cardiovascular and metabolic ailments, as well as diseases of the nervous system, blood system, and others [161,190,191]. Recently, some research has revealed the "switch" mechanism underlying the brain regeneration of salamanders, and constructed a space-time map of brain development and regeneration of single salamander cells [192]. The next step in this field is to achieve brain regeneration in mammals, including humans, which would involve the activation of brain "seed cells" and the introduction of key factors, thus turning on the "switch" of human brain regeneration. It is expected that new treatment methods will soon be developed to improve the clinical rehabilitation of patients with brain diseases. In addition, the potential value of stem cells in anti-viral applications is of great interest. Plant stem cells can resist viruses, and animal stem cells can also use antiviral Dicer (AviD) to resist the invasion of multiple RNA viruses [193]. The antiviral mechanism of stem cells may be of great value for future medical and pharmaceutical research on human viral infection resistance.
Plant regeneration is mainly carried out through somatic embryogenesis or organogenesis; however, plant regeneration can be promoted by transferring plant-related genes [194]. With the rapid development of synthetic biology, this concept has been applied to the regeneration of animals and plants [195]. The concept of "build-to-understand" synthetic biology is instructive in the field of tissue regeneration, where more extensive and flexible research can be achieved by building genetic circuits. Using synthetic biology, we can import genes with strong regenerative abilities into rare and precious plants to increase their yield. CRISPR-Cas9 technology enables genome-wide epigenetic modifications to modify plant regeneration pathways or affect specific gene loci to regulate plant regeneration [196][197][198].
In this review, we have provided a detailed and systematic summary of recent research on animal and plant stem cells in the field of regeneration and described the regeneration mechanisms of animals and plants. In addition, we have outlined the application prospects of stem cells in agriculture, animal husbandry, and regenerative medicine, which may provide new ideas and directions for the protection of endangered species and the development of regenerative medicine. However, because genetic information on higher animals and plants is still lacking, current research is mainly focused on simpler species, such as Arabidopsis and planarians. There is still a long way to go before applications in higher endangered plants and in regenerative medicine can be fully realized. Nevertheless, with rapid developments in synthetic biology, single-cell sequencing, and other technologies, research on higher animals and plants is becoming more feasible. With continued research, the mystery of regeneration will eventually be solved.
Acknowledgments:
The authors thank the teachers and students in our research team for their help and support.
Conflicts of Interest:
The authors declare no conflict of interest.
|
v3-fos-license
|
2017-06-25T16:49:10.501Z
|
2013-03-04T00:00:00.000
|
1505703
|
{
"extfieldsofstudy": [
"Business",
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://malariajournal.biomedcentral.com/track/pdf/10.1186/1475-2875-12-85",
"pdf_hash": "15c5306288af3d94a89845a6a14461cbe9b4bb17",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46553",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "15c5306288af3d94a89845a6a14461cbe9b4bb17",
"year": 2013
}
|
pes2o/s2orc
|
Design, implementation and evaluation of a national campaign to deliver 18 million free long-lasting insecticidal nets to uncovered sleeping spaces in Tanzania
Background Since 2004, the Tanzanian National Voucher Scheme has increased availability and accessibility of insecticide-treated nets (ITNs) to pregnant women and infants by subsidizing the cost of nets purchased. From 2008 to 2010, a mass distribution campaign delivered nine million long-lasting insecticidal nets (LLINs) free-of-charge to children under-five years of age in Tanzania mainland. In 2010 and 2011, a Universal Coverage Campaign (UCC) led by the Ministry of Health and Social Welfare (MoHSW) was implemented to cover all sleeping spaces not yet reached through previous initiatives. Methods The UCC was coordinated through a unit within the National Malaria Control Programme. Partners were contracted by the MoHSW to implement different activities in collaboration with local government authorities. Volunteers registered the number of uncovered sleeping spaces in every household in the country. On this basis, LLINs were ordered and delivered to village level, where they were issued over a three-day period in each zone (three regions). Household surveys were conducted in seven districts immediately after the campaign to assess net ownership and use. Results The UCC was chiefly financed by the Global Fund to Fight AIDS, Tuberculosis and Malaria with important contributions from the US President’s Malaria Initiative. A total of 18.2 million LLINs were delivered at an average cost of USD 5.30 per LLIN. Overall, 83% of the expenses were used for LLIN procurement and delivery and 17% for campaign associated activities. Preliminary results of the latest Tanzania HIV Malaria Indicator Survey (2011–12) show that household ownership of at least one ITN increased to 91.5%. ITN use, among children under-five years of age, improved to 72.7% after the campaign. ITN ownership and use data post-campaign indicated high equity across wealth quintiles. Conclusion Close collaboration among the MoHSW, donors, contracted partners, local government authorities and volunteers made it possible to carry out one of the largest LLIN distribution campaigns conducted in Africa to date. Through the strong increase of ITN use, the recent activities of the national ITN programme will likely result in further decline in child mortality rates in Tanzania, helping to achieve Millennium Development Goals 4 and 6.
Background
The year 2010 was the deadline set by the Roll Back Malaria (RBM) Partnership to reach universal coverage for all populations at risk with locally appropriate malaria interventions, as well as to reduce global malaria cases and deaths from 2000 levels by 50% [1]. This was to be accomplished through the scale-up of core malaria control interventions, such as the use of insecticide-treated nets (ITNs), indoor residual spraying, intermittent preventive treatment for pregnant women and artemisinin-based combination therapy [1]. These activities were expected to contribute to the achievement of the malaria-specific Millennium Development Goal (MDG) 6 by 2015 [1]. Given that malaria accounted for 16% of deaths in children under-five years of age in Africa in 2008, reduction in malaria is also critical for achieving MDG 4 [2][3][4].
ITNs are a very effective measure for malaria control and high use reduces the incidence of symptomatic malaria episodes by 50% and can lower rates of all-cause mortality up to 29% [5]. In Tanzania mainland the responsibility of sustainably and equitably scaling up ITN use lies with the Ministry of Health and Social Welfare (MoHSW). Therefore, the National Insecticide Treated Nets (NATNETS) Programme was established in 2000 under the National Malaria Control Programme (NMCP) of the MoHSW [6,7]. As of late 2012, the programme has worked with numerous partners and been financed by a range of bilateral and multilateral donors. The ITN Cell, a unit within NMCP, coordinates and facilitates all NATNETS activities. It is funded by the Swiss Agency for Development and Cooperation (SDC) through the NETCELL Project, which is implemented by the Swiss Tropical and Public Health Institute in Basel and provides staff and technical support to the unit [8,9].
One of the ITN distribution mechanisms implemented by the NATNETS Programme is the Tanzanian National Voucher Scheme (TNVS). The TNVS has made ITNs widely available and accessible to pregnant women (since 2004) and infants (since 2006) through a voucher system that subsidizes the cost of nets purchased in commercial retail outlets [10]. However, in 2007/8 the MoHSW and other stakeholders considered the rate of increase in ITN ownership and use through the TNVS as too low and inequitable to reach RBM targets of universal coverage by 2010 [11,12]. As a result, the policy to distribute nets at no cost to beneficiaries through mass campaigns was adopted by the NATNETS partners to complement the TNVS. Between August 2008 and May 2010 the Under-five Catch-up Campaign (U5CC) delivered about nine million long-lasting insecticidal nets (LLINs) free-of -charge to every child under the age of five years across Tanzania mainland [11]. Additionally, a plan was developed by NMCP and NATNETS partners in 2008 to implement a second campaign, called the Universal Coverage Campaign (UCC). The aim of the UCC was to issue free LLINs to all sleeping spaces not yet covered through the TNVS and the U5CC. Following receipt of funds from GFATM in late 2009 and the procurement of LLINs during the first half of 2010, implementation of the UCC commenced in July 2010 and was completed in October 2011. This report summarizes the design, implementation and evaluation of the UCC. The financial cost of the campaign and coverage data post-campaign are also presented.
Methods
The UCC was designed, implemented and evaluated in multiple, phased steps described below in chronological order ( Figure 1).
Financing, LLIN procurement and identification of contractors
In July 2008, Tanzania's Country Coordinating Mechanism submitted a Round 8 proposal to GFATM. The grant agreement between the Government of Tanzania and the GFATM was signed in September 2009. The aim of the UCC was to distribute free LLINs to all the sleeping spaces that had not been provided with a net through previous NATNETS activities and to achieve at least 80% net usage, defined as universal LLIN coverage [1].
An open and competitive international tender to supply and deliver LLINs to village level was issued in March 2010 by the procurement consultant (Mennonite Economic Development Associates (MEDA)) on behalf of MoHSW. The tender was awarded to the most competitive bidder, A-Z Textile Mills Ltd, manufacturer of the Olyset ™ net under license from Sumitomo Chemical Co.
The grant sub-recipients had already been identified by the Country Coordinating Mechanism during the development of the GFATM Round 1 Rolling Continuation Channel (RCC) proposal and as the two grants (RCC and Round 8) ran in parallel, the Country Coordinating Mechanism endorsed the utilization of the same subrecipients. After signature of the grant agreement, subrecipients were contracted by the MoHSW through its Procurement Management Unit. Contractors and their role are given in Table 1.
Coordination and planning
ITN Cell personnel together with other NMCP staff and in collaboration with the UCC contractors coordinated the planning and implementation of the nationwide campaign. Regular NATNETS steering committee and coordination meetings [6] were responsible respectively for overseeing and coordinating the UCC. Additionally, regular UCC task force meetings were held to ensure that UCC plans were followed and designated activities were organized, implemented and monitored.
At regional level, the campaign planning process began with a series of meetings with the regional and district officials. The purpose of these meetings was to fully inform and consult these officials with regard to effective LLIN delivery schedules and communication strategies as well as the training of local government stakeholders based on the demographic and geographic characteristics of each district.
To ensure optimal coordination and effective implementation, the 21 regions of mainland Tanzania were divided into seven zones: Southern, Southern Highlands, Central, West Lake, Lake, Coastal and Northern Zones (Figure 2). UCC activities started in June 2010 and proceeded on a rolling basis from one zone to another until completion in October 2011 (Figure 3).
LLIN quantification
The ITN Cell was responsible for quantifying the LLIN requirements. One planning shortfall was that this exercise was conducted before the U5CC had begun, so its lessons could not yet be learned [11]. The number of required nets was therefore derived from the National Bureau of Statistics projections for 2010, based on the 2002 Tanzanian National Census and the 2004 Demographic and Health Survey (DHS) data (Table 2).
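To illustrate how such a quantification can be carried out, the sketch below projects a census population forward and converts it into an estimated number of LLINs still required. The growth rate, persons-per-sleeping-space ratio and all figures used here are hypothetical placeholders, not the values used by the ITN Cell.

```python
# Illustrative LLIN quantification from census projections.
# All parameter values are hypothetical; the ITN Cell used National Bureau
# of Statistics projections and DHS data rather than these placeholders.

def project_population(pop_2002: float, annual_growth: float, years: int) -> float:
    """Project a census population forward with compound annual growth."""
    return pop_2002 * (1 + annual_growth) ** years

def required_llins(pop_2010: float, persons_per_space: float,
                   spaces_already_covered: float) -> int:
    """Estimate sleeping spaces not yet covered by earlier distributions."""
    total_spaces = pop_2010 / persons_per_space
    return max(0, round(total_spaces - spaces_already_covered))

if __name__ == "__main__":
    pop_2002 = 250_000  # hypothetical district population, 2002 census
    pop_2010 = project_population(pop_2002, annual_growth=0.029, years=8)
    nets = required_llins(pop_2010, persons_per_space=2.0,
                          spaces_already_covered=60_000)  # e.g. TNVS + U5CC nets
    print(f"Projected 2010 population: {pop_2010:,.0f}")
    print(f"Estimated LLINs required:  {nets:,}")
```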
UCC pilot
While awaiting the launch of the UCC, the logistics and training contractors along with NMCP implemented a pilot distribution to test and evaluate the effectiveness of the training materials and procedures. It was aimed at gaining field experience and building recommendations for future UCC planning and design. The pilot was conducted in three villages of the Ilemela District in Mwanza Region during 20 days in January 2010. The pilot resulted in a number of specific recommendations to improve training sessions and program design, which were incorporated into the main campaign (MEDA Tanzania: UCC Mwanza Pilot, Final Report. unpublished).
Regional level
Prior to initiating training activities at the regional level, field staff of the training contractor attended a Training of Trainers workshop to become familiar with the curriculum and training materials. Afterwards, at regional level, these field staff visited the Regional Medical Officer and the Regional Commissioner to brief them about the UCC and the required training and promotion activities to be held in their region. A sensitization meeting with each Regional Health Management Team was also held to discuss the UCC implementation.
District level
Similar activities were conducted at district level. The District Medical Officer (DMO) and the District Executive Director were informed about the UCC and the need to train local government officials. A one-day meeting was organized with Council Health Management Teams (CHMTs) in each district to explain the UCC. Additionally, the training contractor organized orientation for District Malaria Focal Persons (DMFPs), who teamed up with the trainers for the training of local government officials.
Division level
Local government officials such as Division Secretaries and Ward and Village Executive Officers (WEO and VEO) were instructed about the UCC in general and provided with the procedures regarding registration and issuing. They were informed about their responsibilities for supervising and reporting the whole process as well as for sensitizing the community. VEOs were also trained on how to select and train volunteers to perform the household sleeping space registration and LLIN issuing processes in each village (World Vision Tanzania: Universal Coverage Campaign, Final Implementation Report, unpublished).
Registration of sleeping spaces
Registration took place in all 14,255 villages in mainland Tanzania. The exercise was done in one zone (three regions) at a time. At village level, VEOs selected and trained four literate and respected community members as volunteers. During a five-day process volunteers conducted house-to-house registration under the supervision and coordination of the VEOs and WEOs. Upon arrival at each household the volunteers used universal coverage registration cards (UCRC) booklets, consisting of sequentially numbered and bar-coded coupons and their corresponding carbon copies, to record all sleeping spaces not yet covered through the TNVS and the U5CC. A sleeping space was defined as any bed, sleeping mat or floor space that could be potentially covered by a net. Afterwards, the head of the household was issued one UCRC coupon for each eligible sleeping space and was instructed to bring the coupon to the assigned LLIN issuing point on one of the three designated issuing days. Households with no one at home received a sticker on the door requesting the household to visit the VEO's office for registration. After the registration, all UCRC booklets were collected from the volunteers and used by the VEO to complete the Village Registration Report.
The WEO then compiled all Village Registration Reports into a Ward Registration Report. The CHMT then collected the Ward Registration Reports and paid allowances to the volunteers and the local government officials who participated in the registration activities. The reports were submitted to the logistics contractor who compiled all data into district packing lists. The logistics contractor also supervised the registration process together with CHMT members to ensure that procedures were correctly followed.
Buffer stock management and LLIN delivery
Buffer stock management
Based on the experience from the U5CC, the MoHSW decided to maintain two different buffer stocks of LLINs. The buffer stock at village level was an additional 5% over the number of registered sleeping spaces. By rounding the actual number of nets needed up to the nearest 40 (the number of nets in a single bale), another 2% on average was automatically added to the village buffer stock. At district level, and under the control of the DMO, an additional 23% of the total number of registered sleeping spaces in the district was used as buffer stocks in the first three zones, giving a total buffer of 30%. Since too many LLINs remained unused after issuing in these zones, the buffer stock at district level was subsequently decreased to 15%. Further, in order to reduce surplus buffer stocks at district level in the first five zones, remaining nets from these zones were redistributed to the Coastal and Northern Zones, the last zones to receive nets.
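The allocation arithmetic described above can be summarised in a short calculation: the village allocation equals the registered sleeping spaces plus a 5% buffer, rounded up to whole 40-net bales (adding roughly another 2% on average), while the district buffer is a fixed percentage of registered spaces (23% in the first three zones, later 15%). The sketch below is a minimal illustration with invented figures; the function names are ours, not the programme's.

```python
import math

BALE_SIZE = 40  # nets per bale

def village_allocation(registered_spaces: int, buffer_rate: float = 0.05) -> int:
    """Registered sleeping spaces + 5% buffer, rounded up to whole bales."""
    with_buffer = registered_spaces * (1 + buffer_rate)
    bales = math.ceil(with_buffer / BALE_SIZE)
    return bales * BALE_SIZE

def district_buffer(district_spaces: int, rate: float = 0.15) -> int:
    """District-level buffer stock (23% in the first three zones, later 15%)."""
    return math.ceil(district_spaces * rate)

if __name__ == "__main__":
    spaces = 1_234  # hypothetical village registration total
    alloc = village_allocation(spaces)
    print(f"Village allocation: {alloc} nets "
          f"({alloc / spaces - 1:.1%} above registered spaces)")
    print(f"District buffer for 100,000 spaces: {district_buffer(100_000):,} nets")
```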
LLIN ordering, production, inspection and delivery
After the compilation of district-level registration data, the logistics partner prepared a district packing list with the numbers of LLINs to be delivered to each district and village, including the calculated buffer stocks. The logistics contractor then sent the zonal order to NMCP for the approval of the MoHSW and NMCP forwarded the official purchase order to the LLIN supplier. LLIN production was done by the local manufacturer, A-Z Textile Mills Ltd in Arusha, in several lots. The quality of each lot was inspected by an independent inspection company (Intertek International Ltd, Kenya). Most importantly, LLIN delivery to village level was the responsibility of the manufacturer. This was of enormous practical benefit to the programme as it took care of the most complex logistical problem, the distribution of the bulky nets to over 14,000 destinations throughout a country of 947,600 sq km with a limited road infrastructure [13].
When arriving at the recipient village, the supplier's representative met with the VEO and the government storage facility keeper to deliver the agreed number of LLINs. Once the LLINs were delivered, the VEOs (DMOs in the case of the district buffer stocks) were responsible for storage until the LLINs were issued to beneficiaries.
LLIN issuing
UCC issuing in each zone took place over three days (Friday to Sunday) to ensure a maximum number of eligible beneficiaries received an LLIN. Depending on its size, each village was divided into several predefined sectors and assigned one issuing point with two volunteers per sector to minimize travelling time to and waiting time at the issuing point. The VEO conducted training on issuing for volunteers and facilitated the storage and transportation of LLINs to the issuing points.
Beneficiaries brought their bar-coded coupons to the issuing point closest to their home and exchanged it for a LLIN. The thumbs of recipients were marked with an indelible ink to prevent them from claiming a second net and their thumbprint was also put on their coupons. The volunteers removed the barcode sticker from the net bag and placed it on the coupon to verify the transaction. Unregistered people, or people who had lost their coupons, were registered or re-registered by the VEO at the last day of issuing and given a LLIN from the buffer stock. Government officials from the NMCP participated in the issuing process by making courtesy calls at regional level and visiting selected issuing points for supervision. Donors also made periodic site visits to observe the issuing process.
After issuing was completed, the redeemed coupons were bundled and handed over to the VEO. The VEOs and WEOs then prepared respectively a Village and Ward Issuing Report. CHMTs collected the Ward Issuing Reports and paid allowances to the issuing volunteers and the local government officials. Reports and coupons were forwarded to the logistics contractor for compilation and scanning into the UCC LLIN database.
Hang-up campaign
To ensure correct use and hanging of LLINs, the Tanzania Red Cross Society conducted a LLIN hang-up campaign in all rural districts about one week following LLIN issuing. Existing volunteers and community members who participated in the U5CC hang-up campaign were used in most cases. CHMTs were informed about the activity and regional and district stakeholders were trained to instruct district supervisors, who in turn trained the local supervisors and volunteers. Volunteers visited 50 to 70 households per day over five to seven days. In case the net had not been hung, volunteers assisted with hanging up the net. They also demonstrated the proper use of LLINs and advised households regarding net maintenance and the importance of consistent net use. Additionally, each household was provided a poster or sticker on proper and consistent LLIN use throughout the year.
Social mobilization
The social mobilization contractor publicized the campaign before and during registration and issuing. This was done through mass media and community outreach activities, including television and radio spots, advertisements in the newspapers, rural film and cultural shows, brochures, posters, T-shirts as well as public meetings and announcements. Generally, the UCC mobilization efforts ensured that communities knew about the campaign plans and that the nets were free of charge to the beneficiaries. Also, the importance of sleeping under an ITN all year round was constantly emphasized.
Monitoring and evaluation
Household surveys followed completion of the UCC and hang-up campaign in seven districts: two in the Southern zone (Nachingwea and Mtwara Urban), three in the Lake zone (Sengerema, Rorya and Chato) and two in the Coastal zone (Kisarawe and Rufiji) (A-C, Figure 2). Surveys in the Southern zone were done in March and April 2011 (middle of the rainy season), in the Lake zone in June 2011 (soon after the rainy season), and in the Coastal zone in October 2011 (dry season). While most of the previous household surveys were conducted in the dry season, evaluation of the UCC was done during different seasons, which can make it difficult to compare data between zones due to seasonality of net hanging. The objective of the surveys was to assess household ITN ownership and use for different age and risk groups. ITN use is defined as the percentage of a given population group that slept under an ITN the night before the survey.
A total of 887, 592 and 580 households were surveyed in the Lake, Southern and Coastal zones, respectively. The zones were selected in line with the sub-national NATNETS surveys for which provision had been made in the NMCP M&E Plan 2008-2013. Districts within a zone were chosen based on the availability of baseline data from the 2008 NATNETS national survey (Marchant T, Bruce J, Nathan R, Mponda H, Sedekia Y, Hanson K: Monitoring and Evaluation of the Tanzanian National Net Strategy, Report on 2008 NATNETS Household, Facility services and Facility users surveys, unpublished). Sampling at district level was done by selecting 10 clusters (villages) with the selection probability proportional to the size of the village. Within each village, one sub-village was chosen using simple random sampling. Afterwards, 30 households were chosen in each selected sub-village using a modified EPI-type sampling procedure, resulting in a total of 300 households per district. Design of the questionnaire was primarily guided by the U5CC household survey tool and focused on household ownership and use of ITNs among different risk groups (under-fives, pregnant women, all household members). As in the standard Malaria Indicator Survey questionnaires, an ITN was defined as: 1) a factory-treated net that does not require any further treatment (LLIN), or 2) any net that has been soaked with insecticide within the past 12 months [14]. Additional questions were added to capture several process indicators specific to the UCC (e.g., awareness of the UCC or indicators related to the UCC registration and issuing procedure) (Nathan R, Sedekia Y: Monitoring and Evaluation of the Tanzanian National Net Strategy, Universal Coverage Campaign, Household Survey Report-Coastal zone, Lake and Southern zones, unpublished).
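The two-stage-plus-household design described above can be expressed as a short sketch: ten villages are selected with probability proportional to size, one sub-village is chosen by simple random sampling, and 30 households are then drawn, giving 300 households per district. The sampling frame below is invented, and the household step is shown as a simple random draw rather than the modified EPI walk used in the field.

```python
import random

random.seed(1)

# Hypothetical sampling frame: village name -> number of households.
frame = {f"village_{i:02d}": random.randint(200, 2_000) for i in range(1, 41)}

def pps_sample(frame: dict, n_clusters: int) -> list:
    """Select clusters with probability proportional to size (with replacement)."""
    names, sizes = zip(*frame.items())
    return random.choices(names, weights=sizes, k=n_clusters)

def sample_households(village: str, n_subvillages: int = 4, per_cluster: int = 30) -> list:
    """Pick one sub-village at random, then draw 30 households from it.
    (The field protocol used a modified EPI walk for this last step.)"""
    subvillage = random.randrange(n_subvillages)
    return [(village, subvillage, hh) for hh in random.sample(range(1_000), per_cluster)]

clusters = pps_sample(frame, n_clusters=10)
households = [hh for v in clusters for hh in sample_households(v)]
print(f"{len(clusters)} clusters, {len(households)} households sampled")  # 10, 300
```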
An equity ratio, defined as the value for the lowest wealth quintile divided by the value for the highest wealth quintile, was used to assess equity across socioeconomic quintiles. Relative wealth was estimated as an index derived from a combination of the household head's education, housing conditions, asset ownership of the household and whether the house was rented or not. Weights for the variables were derived using principal components analysis, leading to a continuous variable. Households were then divided into quintiles according to the value of their score, ranging from the poorest (quintile 1) to the least poor (quintile 5) [15].
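A minimal sketch of this equity calculation is given below: a wealth index is derived as the first principal component of household asset indicators, households are divided into quintiles on that score, and the equity ratio is the indicator value in the poorest quintile divided by that in the least-poor quintile. The asset variables and data are simulated and far simpler than those used in the surveys.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
n = 1_200

# Simulated binary asset indicators (education, housing, assets, tenure, ...).
assets = rng.integers(0, 2, size=(n, 6)).astype(float)
owns_itn = rng.random(n) < 0.7 + 0.1 * assets.mean(axis=1)  # toy outcome variable

# Wealth index = first principal component of the asset matrix.
# (In practice the sign of the component must be checked so that higher = wealthier.)
score = PCA(n_components=1).fit_transform(assets).ravel()

# Quintile 1 = poorest 20%, quintile 5 = least poor 20%.
quintile = np.digitize(score, np.quantile(score, [0.2, 0.4, 0.6, 0.8])) + 1

summary = {q: round(float(owns_itn[quintile == q].mean()), 2) for q in range(1, 6)}
equity_ratio = summary[1] / summary[5]
print("ITN ownership by quintile:", summary)
print(f"Equity ratio (Q1/Q5): {equity_ratio:.2f}")
```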
Sources of operational and financial data
Operational data were compiled from UCC final reports submitted to NMCP by the implementation partners. Further sources of operational data included: minutes of stakeholder meetings, e-mail exchanges, and internal NMCP documentation. For the financial data, all stakeholders were requested to indicate how much they contributed for the UCC per cost category, donor and grant (in the case of the GFATM). This information was cross-checked with the disbursements made by NMCP to contractors and a NATNETS expenditure overview prepared internally. Only direct financial expenses were compiled, without taking into account opportunity costs, the time spent by government officials and other indirect economic costs.
Results
Coordination and planning
According to the original Round 8 proposal, UCC activities were supposed to start in February 2010 and be completed in October 2010. However, delayed grant signature, a lengthy process of tender document approval by GFATM and sub-recipient contracting delayed the UCC start until June 2010. Campaign activities were further disrupted in December 2010, when GFATM requested a revised Procurement and Supply Management plan, to justify the requirement for nets exceeding the originally approved 14.6 million, before its next disbursement could be made. As a result, UCC issuing activities in the last two zones (Coastal and Northern) were put on hold between April and September 2011 pending the approval of the revised Procurement and Supply Management plan by GFATM, with an associated interruption of the supplier's manufacturing schedule. The timetable with final registration and issuing dates is given in Figure 3.
LLIN quantification and procurement
As had been observed during the U5CC, the Census data from 2002 and its projections for 2010 were not sufficiently reliable to determine accurate LLIN requirements for the UCC [11]. Thus, a significantly higher number of nets were needed compared with the original estimate. This discrepancy is shown in Table 3 on a zonal basis. Fortunately, as a result of the competitive tendering process the price quoted for LLINs (including delivery) from the most competitive bidder was lower than the budget for commodities (USD 4.39 versus USD 6.01), which provided financial room to procure the required number of nets. As another positive consequence of this low price, institutional sleeping spaces could also be covered in the campaign although no such provision had been made in the original grant proposal.
Training
At district level, the training contractor organized a UCC introduction meeting for a total of 2,062 CHMT members, which corresponded to 107% of the original target. Additionally, 92% (486)
Registration of sleeping spaces
Registration of sleeping spaces was successfully completed countrywide by the end of February 2011 (Figure 3). A total of 16,059,064 sleeping spaces were registered, including 15,422,453 counted at 9,925,952 rural and urban households and an additional 636,611 found in institutions (Table 3).
In those zones sampled for post-campaign surveys (Lake, Southern and Coastal), 89.6%, 92.6% and 95.9% of the surveyed households were registered, respectively. Most of these were registered before the issuing days (99% in all zones). Table 3 shows that a total of 18,204,040 LLINs were procured and delivered to village level. Of these, 17,617,891 were issued in three ways: to households during official issuing days (16,622,251), afterwards to unmet household sleeping spaces (475,166), or to institutions (520,474). As a result, 96.8% of the LLINs delivered to village level reached beneficiaries. In the Coastal and Northern zones more than 100% of nets delivered by the supplier were issued, because these zones also received redistributed surplus buffer stocks from the previous zones. Unissued LLINs did remain in all zones at the end of the campaign, but interventions such as buffer stock adjustments and LLIN redistribution reduced their numbers to less than 2.6% of the total LLINs delivered. Compared to other zones, a significantly higher number of LLINs remained in the Coastal and Northern zones (11.43% and 7.62% of LLINs delivered to village level, respectively), as in these two zones nets remaining after the issuing days were not recollected as had been done elsewhere. It has been recommended that these LLINs be issued to new high school and college students in the coming semester.
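The headline issuance figure can be reproduced directly from the totals quoted above, as in the short check below (figures copied from the text).

```python
# Arithmetic check of the UCC issuance figures (Table 3 totals quoted in the text).
delivered_to_villages = 18_204_040
issued_official_days = 16_622_251
issued_unmet_spaces = 475_166
issued_institutions = 520_474

issued_total = issued_official_days + issued_unmet_spaces + issued_institutions
print(f"Total LLINs issued: {issued_total:,}")  # 17,617,891
print(f"Share of delivered LLINs reaching beneficiaries: "
      f"{issued_total / delivered_to_villages:.1%}")  # ~96.8%
```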
Delivery, issuing and buffer stock management
In the Lake, Southern and Coastal zones 86%, 92% and 95% of the households received at least one LLIN. Among households that received at least one net, 47% (Lake), 30% (Southern) and 50% (Coastal) received two nets, and 14% (Lake), 41% (Southern) and 17% (Coastal) got three or more nets. Mean travelling time to the distribution point was 25, 20 and 16 minutes in the Lake, Southern and Coastal zones, respectively. Overall, 87%, 90% and 94% of the respondents in the Lake, Southern and Coastal zones spent less than an hour getting to the issuing point. It can therefore be concluded that distribution points were located fairly close to residences (Nathan R, Sedekia Y: Monitoring and Evaluation of the Tanzanian National Net Strategy, Universal Coverage Campaign, Household Survey Report-Coastal zone, Lake and Southern zones, unpublished).
Hang-up campaign
According to the routine data of the Tanzania Red Cross Society (TRCS), 25,191 volunteers visited 96% (6,912,456) of a total of 7,183,106 rural district households in mainland Tanzania. TRCS data from the Northern and Coastal zones also show that in 87% of the households LLINs were already hanging at the time of the visit. In the same two zones, volunteers assisted in hanging 47% of the nets that had not yet been hung, equivalent to five to eight nets a day where 50 to 70 households were visited. Assessed by the presence of the sticker that was to be delivered to the household by the hang-up campaign volunteers, household survey data collected by the monitoring and evaluation contractor showed that 44.9%, 43.9% and 47.2% of all the households that received at least one LLIN in the Lake zone, Nachingwea District (the only rural district within the Southern zone) and the Coastal zone were visited (Nathan R, Sedekia Y: Monitoring and Evaluation of the Tanzanian National Net Strategy, Universal Coverage Campaign, Household Survey Report-Coastal zone, Lake and Southern zones, unpublished). However, these numbers may be underestimates, as it is possible that households were visited but stickers were not provided or were removed later on.
Net ownership and use following the UCC
In Tanzania, national ITN coverage targets [16] have now been reached [17]. ITN use for each population group following the UCC is shown in Table 4 (household insecticide-treated net ownership and use among children under the age of five years, all household members and pregnant women for different geographic areas over time, 2005-2012 [17,24]).
Equity following the UCC
Estimates of household ITN ownership and of ITN use by children under five years of age across socio-economic quintiles were used to assess equity. The equity ratio for both indicators over time (2005-2011) is depicted in Figure 4. Results from national surveys indicate a strong improvement in equity over time, and the continuation of these trends since 2008 was confirmed in the zones covered by the sub-national surveys. Gains in ITN ownership and use over the last seven years were thus higher in the lowest wealth quintiles than in the highest.
Social mobilization
Based on project indicators of the social mobilization contractor all key deliverables were achieved or exceeded. For example, almost 50% more mobile video unit shows were conducted in rural areas than originally planned. The number of theatre shows, live radio programmes and posters were also above target. Press conferences were held, press releases issued, and prominent officials and personalities were invited for TV talk shows to ensure the campaign was widely publicized through print and electronic media. The only discrepancy between the planned and conducted activities was that more radio spots were placed in local radio stations and less in national stations, as during the campaign this approach was found to be more effective (Population Services International: Final Draft UCC Project Report, unpublished).
Awareness of the UCC among heads of households was very high, as shown by the household surveys conducted after the campaign. Overall, 98.6%, 99.7% and 99.5% of the household heads in the Lake, Southern and Coastal zones, respectively, had heard about the UCC. Radio was the most frequently cited source of information in the Lake (40%) and Southern (50%) zones, whereas in the Coastal zone it was community workers (37%). Additionally, the percentage of households registered before the issuing day was above 98% in all surveyed zones, supporting the conclusion that social mobilization efforts for the campaign reached a wide public (Nathan R, Sedekia Y: Monitoring and Evaluation of the Tanzanian National Net Strategy, Universal Coverage Campaign, Household Survey Report-Coastal zone, Lake and Southern zones, unpublished).
Total campaign financing
The direct financial cost of the UCC is shown in Table 5, stratified by cost categories and contributions by donors. The total financial cost of the UCC was USD 96,402,293, resulting in a financial cost per LLIN delivered of USD 5.30 of which USD 4.39 represented the cost of the net (including delivery by the supplier) and USD 0.91 all other campaign costs. At 82.9% (USD 79,915,736) the cost category "LLIN supply and delivery" has the highest share of expenses relative to the total financial cost, followed by logistics with 8.81% (Table 5).
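The unit costs quoted above follow directly from these totals, as the short check below shows (figures copied from the text).

```python
# Arithmetic check of UCC unit costs (totals quoted from Tables 3 and 5).
total_cost_usd = 96_402_293
llins_delivered = 18_204_040
llin_supply_and_delivery = 79_915_736

cost_per_llin = total_cost_usd / llins_delivered
net_share = llin_supply_and_delivery / total_cost_usd

print(f"Financial cost per LLIN delivered: USD {cost_per_llin:.2f}")  # ~5.30
print(f"Share spent on LLIN supply and delivery: {net_share:.1%}")    # ~82.9%
```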
The main donor of the UCC was the GFATM, which contributed USD 91,776,025: USD 89,505,803 from the Round 8 grant and USD 2,270,222 from the Rolling Continuation Channel grant of Round 1. Apart from GFATM, PMI contributed USD 4,364,173 for redistributing the surplus buffer stocks, for logistics and for conducting the hang-up campaign.
Conclusion
The close collaboration between the MoHSW, donors, contracted local partners, local government authorities and volunteers strongly contributed to the success of the UCC. The fact that local government stakeholders were the key implementers of the campaign activities had several advantages. Local leaders and community members felt a sense of pride at being recognized as suitable persons to implement the campaign (MEDA Tanzania: Final Report Universal Coverage Campaign, unpublished). Importantly, the use of local government administrative officials rather than healthcare workers placed no additional burden on the already understaffed Tanzanian healthcare system [18]. Additionally, it fostered collaboration between government authorities at different levels as well as local partners, which contributed to strengthening the public sector generally. However, the campaign required extra time and high commitment from the public sector. There was also a need to pay allowances, as the work was not considered an integral part of official duties, and some local authorities even requested additional reimbursement for the work accomplished. The lack of evidence on the impact of the hang-up campaign and the rather sparse data on the impact of social mobilization are the main limitations of the exercise and would require further investigation. The limitation that only seven out of 125 districts were monitored and evaluated in the original evaluation could be compensated for by the availability of preliminary results from the latest nationally representative Tanzania HIV and Malaria Indicator Survey [19]. Similar to Tanzania, Nigeria and DRC are using a rolling implementation mode on a state or province basis [20,21]. However, both countries receive considerable external operational support for campaign implementation [19-21], whereas in Tanzania most of the work was done either by the in-country LLIN manufacturer (transport logistics), local government entities, local NGO contractors or NMCP. For obvious reasons, being able to rely on existing systems that are not campaign-dependent has many advantages in terms of country ownership, operational sustainability and cost savings.
With an average cost of USD 5.30 per LLIN delivered, the Tanzanian UCC lies well below the median cost of delivery stated by the WHO in its latest World Malaria Report (USD 7.66) [4]. According to this report USD 5.30 is even less than the lowest price cited (USD 6.61) in studies conducted since 2005 [4]. This is supported by the review of White et al. on costs and cost-effectiveness of malaria control interventions, where the lowest financial cost per LLIN delivered was USD 6.01 [22]. Possible reasons for such a low price are (1) large size of this LLIN procurement, (2) local manufacturing and delivery capacity and (3) increased market competition with the number of WHOPES-recommended suppliers increasing from three in 2007 to 10 in 2011 [4]. In the case of the UCC, most of the expenses (83%) are accounted for by the cost of LLIN procurement and delivery. This compares favourably with average numbers mentioned by WHO according to which 70-85% of the cost is for LLIN procurement and 5-10% for LLIN delivery [4]. It is also comparable with the U5CC, where 80.4% of the total financial cost was used for procurement and delivery of LLINs [11].
Preliminary data from the THMIS collected after UCC completion indicate large improvements in ITN household ownership and use in all population groups. The results show that Tanzania is well on track to reach universal coverage, defined as at least 80% net usage, in the near future [1,17]. Further, it can be concluded that these achievements were reached in a fully equitable manner across wealth quintiles as demonstrated by an ownership and use equity ratio of 1 for all surveyed areas ( Figure 4). Similar results were shown in a study done to evaluate the UCC by West et al. in Muleba District, north-west Tanzania, where they found an increase in ITN household ownership of at least one ITN (from 62.6% to 90.8%) and ITN use of children under five years of age (from 56.5% to 63.3%) post UCC as well as no association between net ownership and poverty [23].
The outstanding progress made in these key indicators and the success of the UCC can be fully attributed to the activities of the NATNETS Programme and its partners. To scale up ITN use in Tanzania mainland, the NATNETS Programme invested approximately USD 300 million from 2002 to 2011, of which 57% came from the GFATM. Up to the end of 2011, the TNVS provided 7.8 million (22.7%) ITNs to pregnant women and infants. The U5CC delivered 9 million (26.2%) LLINs nationwide between 2008 and 2010 to children under five years of age. Thus, together with the 17.6 million (51.1%) LLINs issued through the UCC, a total of 34.5 million ITNs were distributed in Tanzania mainland from 2004 to 2011. As can be seen in Figure 5, the increase in the number of ITNs delivered strongly correlates with the improvements in ITN use. The two campaigns conducted between 2008 and 2011, with a total financial cost of USD 160.2 million, contributed massively to the steep increase in use. Regularly conducted nationally representative surveys (Demographic and Health Surveys and Tanzania HIV and Malaria Indicator Surveys) confirmed the sub-national data collected by the monitoring and evaluation contractor and provided national-level data [24-26].
In parallel with the massive increase in ITN use by all groups in the country, the percentage of children under five years of age classified by rapid diagnostic test as having malaria decreased by 46%, from 18.1% to 9.7%, between 2007 and 2012 [17,26]. All-cause under-five child mortality fell by 45% between 1999 and 2010, from 147 deaths per 1,000 live births in 1999 to 81 per 1,000 live births in 2010 [27,28]. The progress and impact series of RBM on mainland Tanzania clearly states that, even considering other factors that might explain the decline in all-cause under-five mortality, the role of malaria control in improving child survival is considerable. According to the LiST estimation model, which was used in the same report to estimate the number of lives saved among children under five, ITN scale-up efforts have averted more than 63,000 malaria deaths among children under five years of age in the past decade [27]. The MDG 4 target for Tanzania is to reduce the under-five mortality rate from 141 deaths per 1,000 live births in 1991 to 47 deaths per 1,000 live births in 2015 [29]. If the decline seen between 1999 and 2010 continues at the same rate, Tanzania will reach a rate of 51 deaths per 1,000 live births in 2015. Consequently, Tanzania will have almost achieved MDG 4 (47 deaths per 1,000), as one of the few major African countries to do so.
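The 2015 figure quoted above is consistent with a simple linear extrapolation of the 1999-2010 decline, as sketched below; assuming a linear trend is our simplification, and other extrapolation methods would give somewhat different values.

```python
# Linear extrapolation of under-five mortality (deaths per 1,000 live births).
u5mr_1999, u5mr_2010 = 147.0, 81.0
annual_change = (u5mr_2010 - u5mr_1999) / (2010 - 1999)  # -6 per 1,000 per year

u5mr_2015 = u5mr_2010 + annual_change * (2015 - 2010)
print(f"Linear decline: {annual_change:.1f} per 1,000 per year")
print(f"Projected 2015 under-five mortality: {u5mr_2015:.0f} per 1,000 live births")  # ~51
print("MDG 4 target: 47 per 1,000 live births")
```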
However, to sustain this achievement in child mortality, malaria control efforts have to be maintained at the current high level. To do so, an ITN distribution strategy is urgently needed to maintain high coverage levels now that universal coverage has been reached (a so-called Keep-Up strategy [30,31]).
In the case of Tanzania mainland, one keep-up mechanism does fortunately already exist: the TNVS. At present the TNVS delivers around 1-1.2 million LLINs per year, less than 20% of the annual requirement of seven million LLINs to maintain universal coverage [32]. After extensive consultative deliberations, an additional strategy to directly deliver free LLINs through primary and secondary schools as a complement to the TNVS mechanism is believed to be the best way forward, considering operational feasibility, efficiency, yearly cost and impact [32]. Based on the calculations of the consultants, the combination of the TNVS and school net delivery will lead to a sustained coverage of about 82% use. If all households with possible access to nets through this combination were included, this strategy would cover 84% of all households representing 95% of the population [32]. The country is currently in the planning stages of piloting this new strategy. While Tanzania is well armed to successfully pursue malaria control activities, much of the success in the future will depend on (1) availability of international funding, and (2) increased domestic funding. National and international leadership is needed to ensure support is maintained for malaria control in the years ahead. Returning to an environment of holo-endemic malaria transmission across Tanzania cannot be considered an option.
|
v3-fos-license
|
2021-12-12T16:55:15.899Z
|
2021-12-09T00:00:00.000
|
245112586
|
{
"extfieldsofstudy": [],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://sajae.co.za/article/download/12821/17819",
"pdf_hash": "a20e10b483087713d39ec9b0fbb9fa353d5ab143",
"pdf_src": "Anansi",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46555",
"s2fieldsofstudy": [
"Agricultural and Food Sciences",
"Sociology"
],
"sha1": "61f64d0da9b951fc05268b53c2bfbc9dc06c61ef",
"year": 2021
}
|
pes2o/s2orc
|
Classification and characterisation of smallholder farmers in South Africa: a brief review
The South African agricultural sector has experienced various transformation processes over the past 25 years, from a predominantly white commercial sector to a black focused sector with an emphasis on smallholder farming. The government is committed to supporting the smallholder farming sector through interventions that include land reform and access to water, amongst others. Despite these efforts, smallholder farmers remain vulnerable, especially during drought periods. Smallholder farmers are not homogeneous; instead, they are diverse, and their farming needs also differ according to their livelihood needs. Due to the diversity of smallholder farmers, it is difficult for the government to effectively respond to their needs. The 2015–2018 drought is a case in point. This paper assesses the challenges of defining and classifying smallholder farmers in South Africa. The complex Western Cape classification system is presented as a case study. The study concludes that there is a need for a simpler method of grouping the smallholder farmers based on their livelihoods to develop relevant support systems.
INTRODUCTION
The South African agricultural sector has gone through various transformation phases since 1994, with a major focus on smallholder farming. The evolution of the agricultural sector as articulated by various academics and scholars indicates the change from a white commercial sector to a black focused smallholder sector (DAFF, 2011; Kirsten and van Zyl, 1998; Hendricks, 2014; Vink and Van Rooyen, 2009; Pienaar, 2013). With the new constitution in 1994 came the deregulation of core functions of the government, although some responsibilities still reside with the national government. Government portfolios were rearranged into either national or provincial competencies, with agricultural functions classed as concurrent competencies (Oettle et al., 1998; Van Niekerk, 2012). Currently, the Department of Agriculture, Forestry and Fisheries is responsible for legislation and policy design, while Provincial Departments of Agriculture implement the policies and legislation at the local level (Van Niekerk, 2012). Tshuma (2014) demonstrated how the current national and provincial governments are committed to supporting the smallholder farming sector through various interventions that include food security and land reform programmes, amongst others. At the centre of these policies and development programmes is the inclusion of female and youth agriculturists (Hart and Aliber, 2012). From 1994 onwards, legislation focused on the empowerment of the most vulnerable groups, women and youth (the 1995 White Paper on Agriculture, the 1998 Agricultural Policy in South Africa discussion document, the 2001 Strategic Plan for South African Agriculture and the 2004 Comprehensive Agricultural Support Programme). Government interventions aimed to rectify the injustices of the past and to foster the development of rural communities (Aliber and Cousins, 2013). Unfortunately, the remote location of many rural communities limits their access to formal markets and job opportunities; therefore, these communities rely on agricultural production for their livelihoods (Steward et al., 2015). The government and investors rely on smallholder farmers to produce food for their households and create jobs for rural communities (Steward et al., 2015).
The efforts of the government and related stakeholders to eradicate poverty and enhance rural economic development through agricultural development are under constant critique (Chikazunga and Paradza, 2013; Tshuma, 2014; Hart and Aliber, 2012). New challenges emerge during these developmental initiatives, including the vulnerability of smallholder farmers to climate change, natural disasters and social unrest, as well as land reform programmes that fail (Ubisi et al., 2017). The fact that policies are designed by the national government while their implementation is the responsibility of the provincial governments renders disaster response programmes ineffective, to the disadvantage of smallholder farmers (Agri SA, 2016). The recent 2015-2018 drought in South Africa is a case in point. There is, therefore, a need to create a common understanding of the characteristics of, and challenges faced by, smallholder farmers so that appropriate response programmes can be developed for periods of disaster. This paper is a brief review of some of the challenges of classifying smallholder farmers in South Africa. The objectives of the study were to review the characteristics of smallholder farmers in South Africa, to critique the current classification systems, and to show how these systems limit support to smallholder farmers. A case study of the Western Cape classification system is given to show the complexity of the current classification systems.
THE SOUTH AFRICAN AGRICULTURAL SECTOR
The agricultural sector in South Africa is well known for its duality with a strong commercial farming component on the one hand and the smallholder component on the other (Kirsten and Van Zyl, 1998; Mmbengwa et al., 2012; Pienaar, 2013; Thamaga-Chitja and Morojele, 2014; Hendriks, 2014). The commercial sector is dominated by white farmers who are also the drivers of the agricultural economy with export markets and sustainable investment arrangements; hence the commercial sector is perceived as the successful farming sector in South Africa.
The development of the rural agricultural sector has gained a lot of interest in the developmental arena since 1994 after the inception of the new government (Pienaar, 2013;Cousins, 2013;Thamaga-Chitja and Morojele, 2014). The South African Government focus is on capacity building of smallholder farmers as outlined in the National Development Plan (Pienaar, 2013). The Government intends to streamline support services towards the smallholder farmers to achieve the food security goals, including job creation and income generation for households. The National Development Plan states that the smallholder farming sector can build the rural economy through adequate extension and advisory services, increase in irrigated agriculture and cultivating of unproductive land in rural areas (DAFF, 2011;Cousins, 2013;Mvelase, 2016).
The private sector has also contributed to the development of smallholder farmers (Koch and Terblanchè, 2013). Different support services are provided to the farmers that vary from extension and support services, training, and mentoring and credit facilities where possible (Fanadzo and Ncube, 2018). However, the different stakeholders or partners in development find it difficult to streamline their support services to the desired target groups because there is no clear classification of smallholder farmers (Pienaar, 2013;Tshoni, 2015;Fanadzo and Dube, 2018). Cousins (2010) argues that literature fails to define smallholder farming because the different types of smallholder farmers are not considered. Van Averbeke et al. (2011) identified smallholder farmers as a group of households and individuals with several limiting factors that undermine their ability to embark on profitable interventions in the agricultural sector.
DEFINING SMALLHOLDER FARMERS
Various scholars and researchers attempt to define the smallholder farmers of South Africa (Cousins, 2013;Van Averbeke, 2011;Thamaga-Chitja and Morojele, 2014). Farmer typologies were previously used to categorise farmers into groups (Dunvernoy, 2000) and classify them. The diversity of different farmers was assessed using various variables to group farmers into different types (ibid). Farmer typologies have been mostly designed by academics, while the National Government also tries to categorise the smallholder farmers of the country. The South African Department of Agriculture (2015) defined smallholder farmers as those farmers who produce for household consumption and markets, subsequently earning ongoing revenue from their farming businesses, which form a source of income for the family. The farmers have the potential to expand their operations and to become commercial farmers but need access to comprehensive support (technical, financial, and managerial instruments).
Even though the government at large has promoted the continuous support of smallholder farmers for the past 25 years, information on these smallholders remains a scarce resource (Okunlola et al., 2016). The same authors also found critical contrasts amongst various groups of agriculturists who are frequently lumped together, and identified shared traits that cut across their activities. Therefore, the term 'smallholder' or 'small-scale' is not helpful or enlightening, "and we need a more nuanced typology of black farming in South Africa" (ibid). Pienaar and Traub (2015) highlight the practice of referring to smallholder farmers by using different words that include small, small-scale, family, subsistence, emerging and smallholder. Smallholder farming households who rely on government grants as their main source of income are actively involved in agricultural production activities, mainly to supplement diets and reduce spending on food bought from outlets (ibid). Smallholder farmers are not a homogeneous group who practice agriculture in the same fashion; instead, they are diverse, and their farming needs also differ according to their livelihood needs. This diversity amongst smallholder farmers makes it difficult to define them (Pienaar, 2013; Tshoni, 2015; Fanadzo and Dube, 2018).
The Western Cape Department of Agriculture argued that the farming systems of the different producers are complex and their livelihoods strategies are diverse; therefore, support services targeted at these groups of farmers should be considered on farm level, taking into account the actual needs of the producers. The Western Cape Department of Agriculture continues to highlight that these producers should not be limited to government support but instead should be serviced by all the relevant actors in the sector on a comprehensive basis (WCDoA, 2017). Farmer typologies and definitions of smallholder farmers have been formulated to understand the smallholder farmers (Cousins, 2013;Greenburg, 2013), but the results do not create a clear understanding of these farmers. Perhaps using farmer livelihoods and resource endowment can provide a better understanding of the different types of smallholder farmers. However, little evidence has been found of studies examining the livelihoods strategies as a mechanism to characterise and classify the smallholder farmers.
CLASSIFICATION OF SMALLHOLDER FARMERS IN SOUTH AFRICA
Kirsten and Van Zyl (1998) discredited attempts to classify smallholder farmers by using the size of the land as a variable, because a high-value crop can deliver commercial outputs on a small piece of land such as one hectare, while five hundred hectares of low-quality land elsewhere might deliver low outputs. The authors suggested an interesting terminology for smallholder farmers: "A small farmer is one whose scale of operation is too small to attract the provision of the services he/she needs to be able to significantly increase his/her productivity." Kirsten (2011) instead suggested economic variables, such as a gross farm income of R500 000 or less per year. However, this added to the complications that already existed. Greenburg (2013) identified two emerging issues. Firstly, using the economic variable to define smallholder farmers included subsistence producers or backyard farmers (farmers who maintained production only to supply food for their families). Secondly, all races were included, which changed the landscape of the smallholder sector (Greenburg, 2013).
Smallholder farmers do not only produce food for the markets; they also produce food for their own consumption (Van Averbeke and Khosa, 2007). The production of food by smallholder farmers in rural communities is very important because of this dual function (income generation and the supply of food for the family), and it therefore contributes towards the rural economy (ibid). Greenburg (2013) put rural agricultural production development at the centre of the South African government's agenda. The contribution of smallholder farmers towards food security in rural areas has resulted in the government and private sector acknowledging the smallholder sector as important in South Africa. However, two of the main characteristics of these farmers are their low education levels and their limited access to land, with some smallholder farmers having access to less than one hectare of land for agricultural production (Tshuma, 2014). These characteristics, together with other challenges such as lack of finance, make it difficult for the farmers to continue to produce sustainably, especially during long periods of drought.
For the South African Government to successfully formulate support programmes and design policies to create a vibrant smallholder farming sector, it is important to clearly define the smallholder farmer (Fanadzo and Dube, 2018). The authors also propose the consideration of farm typologies or farming styles to give guidance and solutions upon establishing the smallholder farmers in South Africa. We argue that characterisation and classification based on livelihoods is a more accurate approach. Fanadzo et al. (2021) found that farmers who had access to livelihood capitals/assets tended to cope and adapt better to drought than those who did not in studies in the West Coast and Overberg Districts in the Western Cape. The literature clearly illustrates diversity amongst smallholder farmers and the complexities that arise in developing policies to support them. During natural disasters such as droughts, the South African government continues to use blanket approaches when supporting smallholder farmers. The latter may be due to the lack of a clear understanding of the smallholder farmers' needs. This gap demonstrates the need to develop a better understanding of smallholder farmers and their needs, especially during drought periods.
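To make the proposed livelihoods-based characterisation more concrete, the sketch below groups hypothetical farm households on simple indicators of the livelihood capitals using k-means clustering. The indicators, simulated data and number of groups are illustrative assumptions only, not a typology validated for South African smallholder farmers.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

rng = np.random.default_rng(42)
n = 300

# Hypothetical livelihood-capital indicators per household:
# natural (ha of land), financial (off-farm income share), human (years of schooling),
# physical (productive assets owned), social (group memberships).
X = np.column_stack([
    rng.gamma(2.0, 1.5, n),   # land size, ha
    rng.beta(2, 5, n),        # off-farm income share
    rng.integers(0, 13, n),   # years of schooling
    rng.poisson(3, n),        # productive assets owned
    rng.poisson(1.5, n),      # group memberships
])

# Standardise indicators and group households into an assumed four farmer types.
Z = StandardScaler().fit_transform(X)
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(Z)

for k in range(4):
    members = X[labels == k]
    print(f"Type {k}: n={len(members):3d}, "
          f"mean land={members[:, 0].mean():.1f} ha, "
          f"mean schooling={members[:, 2].mean():.1f} yrs")
```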
CLASSIFICATION OF SMALLHOLDER FARMERS IN THE WESTERN CAPE
The Western Cape Department of Agriculture has categorised the farmers of the province. Different descriptions of farmers coupled with support interventions are articulated. Table 1 shows the classification of farmers as stipulated by the Western Cape Department of Agriculture (WCDoA, 2018). Within subsistence and smallholder farmers, there are already four categories, and there is a group called small commercial farmers.
The definitions in Table 1 indicate that there is no all-encompassing classification of smallholder farmers, even at the provincial level. The difficulty of defining and classifying smallholder farmers also makes it difficult to respond to their needs, especially during disaster events such as droughts. The current classification of farmers into different classes (subsistence, smallholder and commercial) is not helping much, because the information needed to respond effectively to the needs of the different classes of farmers during disaster periods such as droughts is lacking.
There is a need to consider other variables that exist amongst the farmers in order to characterise and classify them. The farmer characterisation and livelihoods approach is proposed as one such approach, as already argued. However, little evidence has been found of smallholder farmer studies conducted in the Western Cape (Ncube, 2018; Ncube and Lagardien, 2015; Tshoni, 2015), especially studies examining smallholder farmer characteristics. The Western Cape Province is also focused on fruit and wine exports. Furthermore, the Western Cape commercial farming sector is known for its employment creation opportunities for people from various parts of South Africa, especially during fruit harvesting seasons (Nel, 2015). There is therefore a possibility of a mix of different individuals who may see themselves as smallholder farmers depending on the season. Fanadzo et al. (2021) recommend in-depth livelihood studies that consider wellbeing and try to understand the farmer's individual circumstances. The authors also recommended long-term socio-ecological studies to provide some of these answers.
CONCLUSIONS
The homogeneous concept employed by the government and extension service providers when developing support services for smallholder farmers is not helpful, especially during drought periods. There is a need for a more nuanced typology and classification of smallholder farmers, based on their livelihood needs, in order to respond more effectively to natural disasters and climate hazards. Characterising and classifying smallholder farmers based on the livelihoods approach promises a more effective service delivery tool because it recognises their entitlements, endowments, and capabilities. Furthermore, a long-term approach could provide the much-needed support to get smallholder farmers out of poverty.
|
v3-fos-license
|
2019-08-23T06:03:54.560Z
|
2019-07-31T00:00:00.000
|
201203326
|
{
"extfieldsofstudy": [
"Medicine",
"Biology"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/ece3.5489",
"pdf_hash": "e6c59ac7e03ac152358341b3d6341005d8ea577c",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46559",
"s2fieldsofstudy": [
"Environmental Science"
],
"sha1": "c8abc1d660c2330a38c8fef28893e2c4ce8827b2",
"year": 2019
}
|
pes2o/s2orc
|
Effects of contemporary agricultural land cover on Colorado potato beetle genetic differentiation in the Columbia Basin and Central Sands
Abstract Landscape structure, which can be manipulated in agricultural landscapes through crop rotation and modification of field edge habitats, can have important effects on connectivity among local populations of insects. Though crop rotation is known to influence the abundance of Colorado potato beetle (CPB; Leptinotarsa decemlineata Say) in potato (Solanum tuberosum L.) fields each year, whether crop rotation and intervening edge habitat also affect genetic variation among populations is unknown. We investigated the role of landscape configuration and composition in shaping patterns of genetic variation in CPB populations in the Columbia Basin of Oregon and Washington, and the Central Sands of Wisconsin, USA. We compared landscape structure and its potential suitability for dispersal, tested for effects of specific land cover types on genetic differentiation among CPB populations, and examined the relationship between crop rotation distances and genetic diversity. We found higher genetic differentiation between populations separated by low potato land cover, and lower genetic diversity in populations occupying areas with greater crop rotation distances. Importantly, these relationships were only observed in the Columbia Basin, and no other land cover types influenced CPB genetic variation. The lack of signal in Wisconsin may arise as a consequence of greater effective population size and less pronounced genetic drift. Our results suggest that the degree to which host plant land cover connectivity affects CPB genetic variation depends on population size and that power to detect landscape effects on genetic differentiation might be reduced in agricultural insect pest systems.
The spatial distribution of agricultural pest populations can be conceptualized with a metapopulation model (Slatkin, 1977) in which the amount and location of suitable habitat (host plants) changes over time, and only a fraction of pest individuals find and colonize host patches each generation. Under this framework, the founding pest population size at a given host patch depends on the effective distance between plant host patches, which can be increased via resistance of the intervening land cover to pest dispersal (McRae, 2006). Crop rotation and field edge habitats can increase the distance between host patches and can potentially increase landscape resistance to pest dispersal (Huseth, Frost, & Knuteson, 2012), exerting effects on pest genetic variation by reducing gene flow and effective population size (Pannell & Charlesworth, 2000).
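The notion of "effective distance" under landscape resistance (McRae, 2006) can be illustrated with a small circuit-theory calculation: each land cover cell is treated as a node, neighbouring cells are connected with conductances equal to the inverse of their mean resistance, and the effective distance between two patches is the resistance distance in the resulting electrical network. The grid, resistance values and patch locations below are invented for illustration and do not correspond to the landscapes analysed in this study.

```python
import numpy as np

# Toy 5x5 landscape: cell resistance by land cover (1 = potato, 10 = grassland,
# 100 = water). Values are illustrative only.
res = np.array([
    [1,  1, 10, 100, 10],
    [1,  1, 10, 100, 10],
    [10, 10, 10, 100, 10],
    [10, 10, 10,  10,  1],
    [10, 10, 10,  1,   1],
], dtype=float)

nrow, ncol = res.shape
n = nrow * ncol

def idx(r: int, c: int) -> int:
    """Flatten grid coordinates to a node index."""
    return r * ncol + c

# Conductance between 4-neighbour cells = inverse of their mean resistance.
W = np.zeros((n, n))
for r in range(nrow):
    for c in range(ncol):
        for dr, dc in ((0, 1), (1, 0)):
            rr, cc = r + dr, c + dc
            if rr < nrow and cc < ncol:
                g = 1.0 / ((res[r, c] + res[rr, cc]) / 2.0)
                W[idx(r, c), idx(rr, cc)] = W[idx(rr, cc), idx(r, c)] = g

L = np.diag(W.sum(axis=1)) - W   # graph Laplacian
Lp = np.linalg.pinv(L)           # Moore-Penrose pseudoinverse

def resistance_distance(a: int, b: int) -> float:
    """Effective resistance between nodes a and b in the conductance network."""
    return Lp[a, a] + Lp[b, b] - 2 * Lp[a, b]

source, target = idx(0, 0), idx(4, 4)   # two hypothetical potato patches
print(f"Effective (resistance) distance: {resistance_distance(source, target):.2f}")
print(f"Straight-line distance (cells):  {np.hypot(4, 4):.2f}")
```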
In the United States, CPB exhibits a striking geographic pattern of insecticide resistance evolution, with rapid rates of evolution evident in the East, but slow rates of evolution in the West (Alyokhin et al., 2015;Crossley, Rondon, & Schoville, 2018;Olson, Dively, & Nelson, 2000;Szendrei, Grafius, Byrne, & Ziegler, 2012). Effects of agricultural landscape configuration (the distribution of different land cover types across the landscape) on CPB genetic variation might be an important contributor to this pattern of insecticide resistance evolution, via impacts on CPB gene flow and genetic diversity. However, it remains unclear how agricultural landscape structure affects CPB genetic variation and if land management practices are successful in reducing pest gene flow and genetic diversity.
Agricultural landscape structure can limit CPB dispersal primarily in the spring and fall, when beetles are searching for crops or solanaceous weeds and overwintering habitat. In the spring, beetles emerge from overwintering sites in the soil, and typically walk, rather than fly, in search of the nearest potato crop, orienting by olfactory cues (Boiteau, Alyokhin, & Ferro, 2003). Greater distances between potato fields together with unfavorable composition of the intervening landscape can impede navigation of CPB and make it difficult for CPB to reach a potato field; for example, water bodies, cereal crops, and grassland present barriers to CPB movement (Boiteau & MacKinley, 2015; Huseth et al., 2012; Lashomb & Ng, 1984). On the other hand, because potato attracts and retains CPB, areas with sparse potato land cover could unexpectedly enhance connectivity, because successful migrants must travel farther than beetles from areas with dense potato land cover. Though most CPB do not typically travel farther than 1 km to find potato in the spring (Sexson & Wyman, 2005; Weisz, Smilowitz, & Fleischer, 1996), mass flights during warm spring days prior to potato emergence are not uncommon (M. S. Crossley, personal observations) and wind-aided dispersal can facilitate migration over great distances (Hurst, 1975).
In the fall, beetles disperse to field margins, visually orienting toward the color contrasts created by wooded edges (Boiteau et al., 2003;Noronha & Cloutier, 1999;Weber & Ferro, 1993), though overwintering within potato fields can occur as well. If forested field margins act as a refuge for diapausing beetles, fall dispersal distances might be shorter in landscapes with a high proportion of forest.
Overwintering habitat, the stepping stone between spring and fall dispersal, can also indirectly modulate beetle dispersal by decreasing winter survival. After beetles arrive at a field margin, they tunnel into the soil and enter diapause (Tower, 1906). Survival through the diapause state can be influenced by soil composition: sandy soils retain less moisture, and permit beetles to dig deeper-avoiding colder temperatures near the soil surface (Hiiesaar, Metspalu, Joudu, & Jogar, 2006;Weber & Ferro, 1993). Thus, dispersal might also be constrained by underlying soil texture.
In this study, we examined the relationship between landscape structure and patterns of genetic variation among CPB populations, addressing the questions: Does crop rotation affect patterns of genetic variation across growing regions? Do populations isolated by more non-suitable habitat have greater genetic differentiation and less genetic diversity than other populations? We hypothesized that populations connected by more potato land cover, shorter rotation distances (in space), and more suitable overwintering habitat (forest land cover and sandy soils) would exhibit less genetic differentiation and higher genetic diversity, whereas populations separated by more grassland, grain crops, water bodies and greater rotational distances among potato fields would exhibit higher genetic differentiation and lower genetic diversity.
Figure 1. Adult Colorado potato beetle (Leptinotarsa decemlineata Say) feeding on potato (Solanum tuberosum L.) in a commercial potato field in Wisconsin, USA.

We focused our study on regions representative of Northwestern and Midwestern potato agroecosystems: the Columbia Basin of Oregon and Washington, and the Central Sands of Wisconsin. CPB originally colonized the Central Sands during the 1860s (Riley, 1869; Walsh, 1866), but did not arrive in the Columbia Basin until the 1920s (Haegele & Wakeland, 1932; Mote, 1926). CPB population sizes tend to be smaller in the Columbia Basin than in the Central Sands (Crossley, Rondon, & Schoville, 2019). Landscape composition and climate also differ between these regions in important ways. The Central Sands is largely covered by forest, grassland (or pasture), and corn, while shrubland and wheat are the most abundant land cover types in the Columbia Basin (Figure 2). However, both landscapes share many less abundant agricultural land cover types (e.g., water, hay, and various specialty crops), including potato. The Columbia Basin experiences significantly less precipitation than the Central Sands (cumulative annual precipitation = 2,314 mm in the Columbia Basin vs. 5,820 mm in the Central Sands), a factor that could amplify any dispersal-limiting effects of non-suitable land cover in the Columbia Basin. Generally milder winters can also contribute to a higher proportion of "volunteer" potatoes (plants resulting from unharvested tubers remaining in the field from the previous year), which could act as an important bridge between overwintering and summer habitat.
Beetle sampling and sequencing
We focused our sampling on a 12,855 km² area in the Columbia Basin, and an 8,736 km² area in the Central Sands, collecting CPB from commercial potato fields. We sequenced a 544 base-pair fragment of the mitochondrial genome (COI-COII), using the method described in Crossley et al. (2017), from 133 beetles in Oregon and 50 beetles in Washington (and from several other sites in the USA; Tables S1 and S2), and combined these data with existing datasets (Crossley et al., 2017; Grapputo et al., 2005; Izzo, Chen, & Schoville, 2018). Mitochondrial DNA sequence data are available on GenBank (accession no. MK605454-MK605457).
The mitochondrial data provide an independent measure of population structure, and due to a smaller effective population size, can reveal strong patterns of genetic differentiation or changes in population size (Avise, Neigel, & Arnold, 1984). We visualized relationships among mitochondrial DNA haplotypes using a median joining network created in PopART (http://popart.otago.ac.nz/index.shtml ; Bandelt, Forster, & Röhl, 1999), setting epsilon = 0.
Landscape composition and configuration
We obtained land cover rasters from the Cropland Data Layer (USDA-NASS). We then generated binary rasters for each land cover type (target land cover = 1, all other land cover = 0) that accounted for more than one percent of each study extent using functions available in the "rgdal" (Bivand, Keitt, & Rowlingson, 2014) and "raster" (Hijmans, Etten, & Mattiuzzi, 2019) R packages in R 3.4.2 (R Core Team, 2017). We computed landscape resistance to dispersal (McRae, 2006) using the commuteDistance() function in the "gdistance" R package (van Etten, 2017). Resistance distances calculated with commuteDistance() and Circuitscape (McRae, Shah, & Edelman, 2013) have been shown to be equivalent (Marrotte & Bowman, 2017). We calculated geographic distance among sample sites using pointDistance() in the "gdistance" R package. Geographic distances were standardized by dividing by the standard deviation prior to landscape genetics analysis (with BEDASSLE; see below).
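The resistance-distance idea can be illustrated outside of R as well. The following is a minimal Python sketch, not the paper's workflow (which used gdistance::commuteDistance()): it builds a 4-neighbour graph over a tiny hypothetical binary raster, assigns lower conductance to non-target cells, and computes effective resistance between two hypothetical site cells from the pseudoinverse of the graph Laplacian.

```python
# Minimal sketch of a commute/resistance-distance calculation on a raster grid.
# Assumptions (not from the paper): a tiny binary raster, conductance values,
# and two hypothetical sampling sites; the paper used gdistance::commuteDistance() in R.
import numpy as np

raster = np.array([[1, 1, 0, 0],
                   [1, 0, 0, 1],
                   [0, 0, 1, 1]])          # 1 = target land cover (e.g., potato), 0 = other

cond = np.where(raster == 1, 1.0, 0.1)      # non-target cells conduct poorly (hypothetical weights)
rows, cols = raster.shape
n = rows * cols
idx = lambda r, c: r * cols + c

W = np.zeros((n, n))                        # weighted adjacency of a 4-neighbour grid graph
for r in range(rows):
    for c in range(cols):
        for dr, dc in ((0, 1), (1, 0)):
            rr, cc = r + dr, c + dc
            if rr < rows and cc < cols:
                w = 0.5 * (cond[r, c] + cond[rr, cc])   # edge conductance = mean of cell conductances
                W[idx(r, c), idx(rr, cc)] = W[idx(rr, cc), idx(r, c)] = w

L = np.diag(W.sum(axis=1)) - W              # graph Laplacian
Lp = np.linalg.pinv(L)                      # Moore-Penrose pseudoinverse

def resistance_distance(i, j):
    """Effective resistance between nodes i and j; commute distance = vol(G) * resistance."""
    return Lp[i, i] + Lp[j, j] - 2 * Lp[i, j]

site_a, site_b = idx(0, 0), idx(2, 3)       # hypothetical sample-site cells
vol = W.sum()                               # sum of node degrees = 2 * total edge weight
print("resistance distance:", resistance_distance(site_a, site_b))
print("commute distance   :", vol * resistance_distance(site_a, site_b))
```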
Prior to landscape genetics analysis, we removed variables that were highly correlated with geographic distance in both study extents, by examining Pearson correlation coefficients (using a threshold of r = .75) and axis loadings from principal components analysis (done with prcomp in R) on distances with all variables. This resulted in the removal of fallow cropland and wetlands. This analysis also revealed that "shrubland" and "grassland/pasture" classifications were frequently interchanged (possibly erroneously) among years; we therefore combined these land cover classifications into "grassland/ shrubland" for landscape genetics analysis. We averaged resistance distances for each land cover type across years and standardized resistance and geographic distances by dividing by the standard deviation, as recommended for downstream landscape genetics analysis (Bradburd, Ralph, & Coop, 2013). We averaged across years because, though the locations of some land cover types changed among years between 2007 and 2015, the overall composition of the landscape was stable and constrained the extent of changes in configuration. Though we wanted to account for the small changes that we did observe among years, we did not test effects of specific years because the resolution of our genetic data was not appropriate for parsing effects of specific land cover types in specific years.
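The variable-screening step described above can be sketched in a few lines. This is a hypothetical Python illustration (the paper used Pearson correlations and prcomp() in R); the toy matrices, the variable made collinear with distance, and the site count are all assumptions.

```python
# Sketch of screening landscape distance variables against geographic distance.
# Hypothetical toy matrices stand in for the real resistance and geographic
# distances; the paper did this with Pearson r and prcomp() in R.
import numpy as np

rng = np.random.default_rng(0)
n_sites = 8
tri = np.tril_indices(n_sites, k=-1)            # lower triangle of a pairwise matrix

def vec(mat):
    return mat[tri]                             # flatten pairwise distances to a vector

geo = rng.random((n_sites, n_sites)); geo = (geo + geo.T) / 2; np.fill_diagonal(geo, 0)
covers = {
    "fallow": geo * 0.9 + rng.normal(0, 0.05, geo.shape),   # made collinear with distance (toy)
    "potato": rng.random(geo.shape),
}
covers = {k: (v + v.T) / 2 for k, v in covers.items()}

geo_v, kept = vec(geo), []
for name, mat in covers.items():
    r = np.corrcoef(vec(mat), geo_v)[0, 1]
    print(f"{name}: Pearson r with geographic distance = {r:.2f}")
    if abs(r) <= 0.75:                          # screening threshold used in the text
        kept.append(name)

# PCA on the stacked distance vectors (analogous to prcomp on the distance variables)
X = np.column_stack([geo_v] + [vec(covers[k]) for k in covers])
Xc = X - X.mean(axis=0)
_, s, Vt = np.linalg.svd(Xc, full_matrices=False)
print("PC1 loadings (geographic distance first):", np.round(Vt[0], 2))
print("variables retained after screening:", kept)
```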
To assess the suitability of landscapes in the Columbia Basin and Central Sands for CPB overwintering, we quantified the percent sand in the soil, and the amount and configuration of potato field and forest edges. We calculated the percent sand in the soil from
Effect of landscape resistance on genetic differentiation
We estimated the effect of landscape resistance on genetic differentiation using the Bayesian Estimation of Differentiation in Alleles by Spatial Structure and Local Ecology framework (BEDASSLE; Bradburd et al., 2013). BEDASSLE employs a Bayesian approach to simultaneously identify the effect sizes of geographic distance and landscape resistance on differentiation in allele frequencies among populations. In addition to enabling a comparison of relative effect sizes (thus controlling for the effect of geographic distance while testing for an effect of landscape resistance), BEDASSLE does not assume a linear relationship between geographic distances, landscape resistance, and genetic differentiation (Bradburd et al., 2013). This is accomplished by estimating the rate of decay of allele frequency covariance with distance in the same model in which the effects sizes of geographic distance and landscape resistance are estimated. For each land cover variable, we ran the beta-binomial model several (between five and ten) times and adjusted tuning parameters to achieve acceptance rates between 20% and 70% (Bradburd et al., 2013). When acceptance rates are less than 20%, we consider our search of parameter space to have been too narrow and inefficient (probably having settled on a local, rather than global, optimal set of parameter values). Conversely, when acceptance rates exceed 70%, we consider our search to have been too unconstrained and erratic to have precisely identified the optimal set of parameter values (Bradburd, 2014). We then ran 30 independent beta-binomial Markov chains for four million steps per land cover variable. We assessed evidence for BEDASSLE model convergence by examining trace plots of the posterior probabilities and of the ratios of αE/αD (effect sizes of landscape resistance and geographic distance) and examining "scale reduction factors" calculated with a Rubin-Gelman test (using gelman.diag() in the "coda" R package (Plummer, Best, Cowles, & Vines, 2006)), which indicates model convergence when the variance in posterior probabilities within Markov chains is equivalent to the variance between Markov chains (upper 95% confidence interval of scale reduction factors approaches one). To test if effect sizes were sensitive to correlations among land cover types, we also jointly estimated effect sizes in a model including all land cover types.
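The convergence check used here is easy to reproduce. The sketch below (Python, with simulated chains standing in for BEDASSLE posterior draws; the paper used gelman.diag() from the coda R package) computes the potential scale reduction factor from within- and between-chain variances. Chain counts and lengths are toy values.

```python
# Sketch of the Gelman-Rubin potential scale reduction factor (R-hat).
# The chains are simulated stand-ins for BEDASSLE posterior samples of aE/aD;
# the paper computed this diagnostic with gelman.diag() in the coda R package.
import numpy as np

rng = np.random.default_rng(1)
m, n = 30, 4000                              # 30 chains, 4,000 retained draws each (toy numbers)
chains = rng.normal(0.0, 1.0, size=(m, n)) + rng.normal(0, 0.02, size=(m, 1))

def gelman_rubin(chains):
    m, n = chains.shape
    chain_means = chains.mean(axis=1)
    chain_vars = chains.var(axis=1, ddof=1)
    W = chain_vars.mean()                    # mean within-chain variance
    B = n * chain_means.var(ddof=1)          # between-chain variance (scaled by n)
    var_hat = (n - 1) / n * W + B / n        # pooled posterior variance estimate
    return np.sqrt(var_hat / W)              # potential scale reduction factor

print("R-hat:", round(float(gelman_rubin(chains)), 3))  # values near 1 indicate convergence
```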
Effect of crop rotation on genetic diversity
We expected that populations occupying areas with greater crop rotation distances (in space) would exhibit lower genetic diversity, due to the negative effect of crop rotation on CPB dispersal and abundance in potato fields (Sexson & Wyman, 2005; Weisz et al., 1996).
To test this, we first digitized potato fields that occurred the year prior to CPB sampling and within a 10 km radius of our sample sites, using the Cropland Data Layer (USDA-NASS), Google Earth imagery, and ArcMap (ESRI). We chose a 10 km radius to conservatively account for potential long-distance, wind-aided dispersal events, though the maximum expected spring dispersal distance is only 1.5 km (Sexson, & Wyman, 2005;Weisz et al., 1996). We then calculated distances between sample sites and the centroids of potato fields, because the distance to the field centroid represents the average distance a CPB would have needed to disperse in order to colonize the sample site. We then calculated observed heterozygosity (H O ) and average nucleotide diversity (π) in sampled CPB populations using the populations module of Stacks (Catchen, Amores, & Hohenlohe, 2011).
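As a concrete illustration of the rotation-distance metric, the sketch below computes the median distance from one sampling site to the centroids of the previous year's potato fields within 10 km. The coordinates are hypothetical and assumed to be in a projected (metre) system; the actual digitizing was done with the Cropland Data Layer, Google Earth imagery, and ArcMap.

```python
# Sketch of the median crop-rotation-distance calculation for one sample site.
# Coordinates are hypothetical and assumed to be in a projected (metre) CRS;
# the paper digitized prior-year potato fields with the CDL, Google Earth, and ArcMap.
import numpy as np

site = np.array([500_000.0, 4_900_000.0])          # hypothetical sample-site coordinates (m)
field_centroids = np.array([                       # hypothetical prior-year potato field centroids
    [501_200.0, 4_901_500.0],
    [506_800.0, 4_897_300.0],
    [493_400.0, 4_905_900.0],
    [512_500.0, 4_899_100.0],                      # this one falls outside the 10 km radius
])

dists = np.linalg.norm(field_centroids - site, axis=1)
within_10km = dists[dists <= 10_000]               # the 10 km search radius used in the text
median_rotation_km = np.median(within_10km) / 1_000
print(f"median rotation distance: {median_rotation_km:.2f} km")
```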
Observed heterozygosity was the average among SNPs of the proportion of genotypes that were heterozygous. Nucleotide diversity was calculated as Weir and Cockerham's π, again averaged across SNPs for each population. Lastly, we regressed average nucleotide diversity and observed heterozygosity on the median crop rotation distance (the median taken from among all possible distances between the focal field in year t and all potato fields in year t-1 located within 10 km of the focal field) using lm() in R. We also calculated average pairwise F_ST between CPB populations using the method of Weir and Cockerham (1984), implemented with calculate.all.pairwise.Fst() in the "BEDASSLE" R package (Bradburd, 2014).

Wisconsin, on the other hand, possessed seven haplotypes, three of which were unique to Wisconsin. The most common haplotype was shared between regions, reaching a frequency of 89% among Wisconsin beetle samples. Across all Northwestern and Midwestern beetle samples, haplotypes rarely differed by more than one nucleotide substitution (Figure S1).
Landscape structure
Land cover composition differed greatly between the Columbia Basin (Oregon and Washington) and the Central Sands (Wisconsin). Factors that could affect overwintering success included the sandiness of the soil and the configuration of potato field and forest edges.
We found that the proportion of sand in surface soils around sample sites was similarly high between regions ( Figure S2), as expected for a crop cultivated in high-drainage soils. The edge density of potato was greater surrounding Central Sands sample sites at 1 and 5 km scales, but not at 10 km (Table 1). The edge density of forest was also significantly greater surrounding Central Sands sites (Table 1), as expected based on regional differences in the abundance of forest land cover.
Potato land cover was more interspersed among other land cover types in the Central Sands, regardless of scale (Table 1).
Estimates of landscape resistance to gene flow (calculated from land cover rasters with commuteDistance() in R) for potato land cover were moderately correlated with estimates of landscape resistance for other land cover types in both regions. In the Columbia Basin, resistance distances for grain (r = .59), forest (r = .58), and bean (r = .57) land cover exhibited the highest correlations with that of potato, whereas in the Central Sands, grassland/shrubland (r = .77), corn (r = .74), water (r = .69), and sand (r = .65) exhibited the highest correlations.
Effects of landscape resistance on genetic differentiation
BEDASSLE models exhibited good convergence among chains after four million steps. Upper 95% confidence limits of scale reduction factors estimated with Gelman-Rubin tests were close to one (mean = 1.42; standard error = 0.07) (Table S3). Effect sizes of geographic distance and landscape resistance on allele frequency covariance were generally low, ranging from 10^-7 to 9 and centered at 10^-3 for geographic distance, and ranging from 10^-4 to 4 and centered at 10^-3 for landscape resistance. Values of α2, the parameter describing the rate of decay in allele frequency covariance with geographic distance, were significantly larger (covariance decayed more rapidly with increasing distance) in the Central Sands (Columbia Basin mean α2 = 1.15, Central Sands mean α2 = 1.58; df = 1, 8018; F = 1,017; p << .0001).

Table 1. Summary of class-level landscape metrics for potato and forest land cover in the Columbia Basin (Oregon and Washington) and Central Sands (Wisconsin), averaged among years 2007-2015.

Figure 3. BEDASSLE estimates of the effect size of landscape resistance relative to geographic distance (αE/αD) on allele frequency differences in the Columbia Basin (Oregon and Washington) and the Central Sands (Wisconsin), with each land cover type analyzed separately (a) and all land cover types included in a single model (b). Boxplots represent the distribution of final parameter estimates across 30 independent Markov chains run for four million steps each. Right panels depict the same data as the left panels, but zoom out to visualize maximum outliers.
Average relative effect sizes of contemporary land cover variables on genetic differentiation were consistently higher in the Columbia Basin ( Figure 3, Table S3), being highest for potato (αE/αD = 3,329), followed by corn (87), water (67), and grassland/ shrubland (50), but only the effect of potato in the Columbia Basin was significantly different from other land cover effects (p < .001).
When all land cover types were included in the same model, effect sizes became more even across land cover types (Figure 3, Table S3) and no land cover types exhibited statistically significant differences in average relative effect sizes. Overall reductions in effect size estimates were greater for the Columbia Basin than the Central Sands between models treating variables independently and together.
Nucleotide diversity (π), but not observed heterozygosity, decreased with increasing crop rotation distance in the Columbia Basin, though this relationship was only marginally significant (R² = 30%; p = .09) (Figure 4). There was no relationship between crop rotation distance and genetic diversity in the Central Sands (Figure S3).
DISCUSSION
We used contemporary samples of land cover and SNP genotype data to detect associations between landscape resistance and genetic differentiation among CPB populations in the Columbia Basin (Oregon and Washington) and Central Sands (Wisconsin).
We hypothesized that CPB genetic differentiation would decrease with increasing potato land cover, shorter rotational distances (in space), and greater abundance of forest land cover and sandy soil between sites.
Conversely, we expected genetic differentiation would increase with increasing grassland/shrubland, grain, water cover and with larger rotational distances. Models that independently considered land cover effects on genetic differentiation identified a strong effect of potato land cover in the Columbia Basin but no landscape effects in the Central Sands ( Figure 3). Comparing land cover correlations and joint estimates of land cover effects in a single BEDASSLE model revealed no strong independent effects of any landscape variable ( Figure 3, Table S3), suggesting agricultural land cover has weak, correlated effects on CPB genetic differentiation. Parsing effects of correlated land cover types on genetic differentiation is a significant challenge in landscape genetics (Cushman, McKelvey, Hayden, & Schwartz, 2006), one that may be especially difficult in agricultural landscapes, where land cover can turnover rapidly and is correlated in space and time.
Importantly, we also observed a correlation between potato field rotation distances and nucleotide diversity, suggesting that crop rotation, in addition to reducing the timing and abundance of CPB infestations in potato fields, also acts to reduce the genetic diversity of local CPB populations.
Land cover effects on genetic differentiation
In the Columbia Basin, sites connected by a high amount of potato land cover exhibited slightly lower genetic differentiation, consistent with the hypothesis that having an abundance of host plants facilitates dispersal and gene flow over the landscape. A similar relationship was found between genetic differentiation and grain (predominantly wheat) land cover. Wheat typically follows potato in crop rotation schemes in the Columbia Basin, but has not been a prevalent crop in the Central Sands since the early 1900s. Wheat is known to be a barrier to CPB dispersal by walking (Huseth et al., 2012;Lashomb & Ng, 1984;Schmera, Szentesi, & Jermy, 2007), but also harbors volunteer potatoes (plants growing from remnant, unharvested tubers) that serve as an early, systemic insecticide-free food source (Xu & Long, 1997). Thus, the effect of wheat reducing genetic differentiation could be a consequence of its close spatiotemporal association with potato in Columbia Basin agroecosystems and suggests volunteer potatoes may be important facilitators of gene flow.
The lack of a clear association between landscape resistance and genetic differentiation in the Central Sands could mean that any effects of land cover on CPB gene flow were too weak to detect.
Detection might be hindered if populations are very large (Wright, 1943) or are not in migration-drift equilibrium (Rousset, 1997). Instead, we suggest that gene flow among CPB populations in the Central Sands is relatively unconstrained by landscape composition or configuration and that this could be due to three factors.
First, CPB population sizes are substantially higher in the Central Sands than in the Columbia Basin. CPB dispersal is density-dependent (Boiteau et al., 2003;Harcourt, 1971), so higher population sizes could cause higher densities, facilitating higher gene flow as well as the maintenance of higher genetic diversity. Indeed, we frequently observe mass-migration events in densely CPB-populated Wisconsin potato fields. Second, land cover in the Central Sands is likely more suitable for overwintering success (there were higher amounts of potato and forest edge). Given that winter survivorship can be as low as 5% (Huseth & Groves, 2013)
Crop rotation and genetic diversity
Consistent with the absence of an effect of landscape configuration on CPB genetic differentiation among CPB populations in the Central Sands, we found no relationship between crop rotation distances and genetic diversity in the Central Sands. In contrast, we found decreasing genetic diversity with increasing crop rotation distance in the Columbia Basin. This regional difference could be due to the generally larger rotation distances observed around our sample sites in the Columbia Basin or to differences in the suitability of the landscape for CPB dispersal. Importantly, our analysis did not identify any land cover types that specifically impede gene flow, suggesting that effects of crop rotation on CPB genetic variation are related to the sensitivity of dispersing CPB to other environmental factors in the absence of host plant (potato) land cover. One such environmental factor could be climate: There is much lower moisture availability in the Columbia Basin than in the Central Sands, which could reduce the amount of time and distance over which CPB can disperse before succumbing to desiccation and starvation.
Management implications
Crop rotation is a powerful management practice, leveraging one of the most malleable features of the agricultural landscape: the spatiotemporal connectivity of crop land cover. The geographic distance between rotated potato fields and the composition of the intervening landscape affect CPB dispersal and abundance (Huseth et al., 2012; Sexson & Wyman, 2005). However, our data suggest that crop rotation does not always reduce gene flow and might have a limited effect on the spatial pattern of neutral and adaptive genetic variation in some landscapes. In regions like the Columbia Basin, by contrast, crop rotation could have an important effect on patterns of genetic variation.
The reduced genetic connectivity observed between CPB populations separated by low potato land cover suggests that increasing rotation distances (in space and time) could reduce rates of adaptive gene flow and levels of genetic diversity and could limit the long-term viability of CPB populations in this region. Moving forward, we plan to investigate the importance of other environmental (e.g., climate, natural enemies) and operational (e.g., insecticide use) factors, in addition to landscape connectivity, in driving patterns of geographic variation in CPB genetic variation and adaptation to insecticides.
ACKNOWLEDGMENTS
We thank the editor and reviewers for helpful feedback on previous versions of this manuscript. We also thank Monica Turner, John Pool, Russell Groves, and Johanne Brunet for helpful comments on this manuscript. We are grateful for the generous grant support from the Hatch Act formula funds (#WIS01813 and #WIS02004), the Wisconsin Potato and Vegetable Growers Association, the University of Wisconsin Consortium for Extension and Research in Agriculture and Natural Resources, and NSF IGERT (#1144752).
CONFLICT OF INTEREST
The authors declare no competing interests.
AUTHOR CONTRIBUTIONS
MSC conceived of the study, gathered and analyzed data, and wrote the manuscript. SIR gathered data, provided funding, and wrote the manuscript. SDS gathered data, provided funding, and wrote the manuscript.
DATA AVAILABILITY STATEMENT
Illumina reads are available from the National Center for Biotechnology Information Short Read Archive (accession no.
[pes2o/s2orc record 9284255 | added 2014-10-01 | created 2011-11-28 | license CC-BY (GOLD OA) | year 2011 | https://www.mdpi.com/1424-8220/11/12/11273/pdf]
Modeling and Analysis of an Energy-Efficient Mobility Management Scheme in IP-Based Wireless Networks
An energy-efficient mobility management scheme in IP-based wireless networks is proposed to reduce the battery power consumption of mobile hosts (MHs). The proposed scheme manages seven MH states, including transmitting, receiving, attention/cell-connected, attention/paging area(PA)-connected, idle, off/attached, and detached states, to efficiently manage battery power, radio resources, and network load. We derive the stationary probabilities and steady state probabilities of the seven MH states for the proposed scheme in IP-based wireless networks in compact form. The effects of various input parameters on MH steady state probabilities and power consumption are investigated in the proposed scheme compared to the conventional scheme. Network costs such as cell updates, PA updates, binding-lifetime-based registrations, and paging messages are analyzed in the proposed and conventional schemes. The optimal values of PA size and registration interval are derived to minimize the network cost of the proposed scheme. The combined network and power costs are investigated for the proposed and conventional schemes. The results provide guidelines to select the proper system parameters in IP-based wireless networks.
Introduction
Wireless networks and systems have evolved toward an IP-based network architecture. In such networks and systems, mobility needs to be handled at the IP layer based on the Internet Engineering Task Force (IETF) concept. Many IP-based mobility protocols, such as Mobile IPv4 (MIPv4) [1], Mobile IPv6 (MIPv6) [2], and IP micro-mobility protocols (e.g., Cellular IP [3] and HAWAII [4]) have been proposed and studied. MIPv4 [1] and MIPv6 [2] do not distinguish idle mobile hosts (MHs) from active MHs. These MIP protocols support registration but not paging. Hence, a care-of-address (CoA) needs to be updated whenever an MH moves to a different subnet which is served by a different foreign agent (FA) in the MIPv4 [1] or by a different access router (AR) in the MIPv6 [2] without regard to the MH states, i.e., active and idle. This results in a significant waste of the MH battery power and an unnecessary signaling load because it is expected that wireless IP users are not actively communicating most of the time.
Various schemes on IP paging services for MHs have been studied. The P-MIP [5][6] is an extension to MIP which is proposed to reduce the signaling load in the core Internet. In this scheme, two MH states, i.e., active and idle, were defined. In the active state of an MH, registration occurs whenever the MH changes its cell. On the other hand, in the idle state of an MH, a registration occurs only if the MH changes its paging area (PA). When there are any incoming data for the idle MH, paging is performed in order to find the exact location of the called MH.
Many enhancements for the MIP protocols, such as hierarchical Mobile IPv6 (HMIPv6) [7] and fast handover for Mobile IPv6 (FMIPv6) [8], have been investigated. Recently, the IETF Network-based Localized Mobility Management (NETLMM) working group proposed the NETLMM protocol [9][10]. The Proxy Mobile IPv6 (PMIPv6) [11] was also developed by the IETF NETLMM working group. In PMIPv6, the network supports IP mobility management on behalf of the MHs. Qualitative and quantitative comparisons between MIPv6 and PMIPv6 have been investigated by Kong et al. [12]. The PMIPv6-based global mobility management architecture and protocol procedure known as GPMIP was presented by Zhou et al. [13].
In our earlier study [14], we proposed a mobility management scheme that considered the detached and off states for IP-based mobile networks. We analyzed an optimal rate of binding-lifetime (BL)-based registrations which yields a minimum network cost when the registrations are utilized as a means of identifying the off state of MHs. To reduce MH power consumption, it is important to manage the idle MH state in the mobility management scheme for an efficient battery power management of MHs. In many IP-based mobility management schemes including that in our earlier study, MHs perform a PA-based registration in the idle or dormant state. However, to operate the MHs as a fully power-saving mode in the idle or dormant state, the state for the PA-based registration may need to be distinguished from the idle or dormant state.
In our earlier study [15], we proposed six MH states in which the communicating state was not divided into transmitting and receiving states. We derived the state transition probabilities and mean sojourn times in six MH states, and analyzed the effects of parameters based on simple exponential distribution assumed on session holding time and binding-lifetime. In our recent study [16], we derived the stationary probabilities and steady state probabilities of the proposed six MH states. The effects of session holding time on MH steady state probabilities and power consumption were investigated. We considered more practical distributions, i.e., Erlang and Gamma distributions, on session holding time.
In this paper, we propose an energy-efficient mobility management scheme for reducing MH power consumption in IP-based wireless networks. The proposed scheme manages seven MH states, including transmitting, receiving, attention/cell-connected, attention/PA-connected, idle, off/attached, and detached states, to efficiently manage battery power, radio resources, and network load. To compare the power-saving effect of the proposed scheme with that of the conventional scheme, we derive the stationary probabilities and steady state probabilities of both the proposed scheme with the seven MH states and the conventional scheme for Mobile IP-based wireless networks in compact form. The effects of various input parameters on MH steady state probabilities and power consumption are investigated in both the proposed scheme and conventional scheme with consideration of exponential and fixed distributions on the interval of BL-based registration. Network costs such as cell updates, PA updates, binding-lifetime-based registrations, and paging messages are analyzed in the proposed and conventional schemes. The effects of various input parameters on the network costs for the proposed and conventional schemes are investigated. The optimal values of PA size and registration interval are derived to minimize the network cost of the proposed scheme. We also investigate the combined network and power cost for the proposed and conventional schemes using various weighting factors. These analytical results provide guidelines to select the proper system parameters. The results can be utilized to analyze the performance of mobility management schemes in IP-based wireless networks. This paper is organized as follows: An IP-based wireless network architecture and an energy-efficient mobility management scheme are presented in Section 2. The MH state transitions are modeled, and the stationary probabilities and steady state probabilities of the seven MH states are analyzed. The MH energy consumptions as well as the network costs for both the proposed and conventional schemes are analyzed. The optimal values of PA size and registration interval are derived to minimize the network cost in Section 3. Numerical examples are used to investigate the MH steady state probabilities, the power saving effect, and the network costs compared with the conventional scheme for Mobile IP-based wireless networks in Section 4. Finally, conclusions are presented in Section 5.
IP-Based Wireless Network Architecture and MH State Transition Model
An IP-based wireless network architecture is shown in Figure 1 [15]. An access router (AR) provides MHs with IP connectivity. The AR acts as a default router to the currently served MHs. Since MIPv6 [2] provides many advantages over MIPv4 [1], MIPv6 [2] is considered a reference mobility protocol in this paper. However, we note that the proposed scheme can be applied to both the MIPv4 and MIPv6-based wireless networks. An energy-efficient mobility management scheme is proposed to manage the following seven MH states: transmitting, receiving, attention/cell-connected, attention/PA-connected, idle, off/attached, and detached states. Transmitting, receiving, and attention/cell-connected MHs behave in the same manner as MIP. The correspondent node (CN) and home agent (HA) do not need to be changed. The MH and paging agent (PAgnt) require only minor changes. The PAgnt conducts paging-related functions and manages one or more PAs. Two or more ARs can exist in a PA. To establish the PA identity, a unique PA identifier (PAI) can be used. A transmitting, receiving, or attention/cell-connected MH registers its collocated care-of address (CCoA) at the corresponding HA as in MIP. Hence, the PAgnt does not need to be involved in the MIP registration procedure. When a transmitting, receiving, or attention/cell-connected MH moves to a different cell which is served by a different AR, the MH conducts cell-based registration in the same manner as the MIP registration. After the data session is completed, the MH enters the attention/cell-connected state and an attention timer is reset and restarted. The attention timer is used to decide the instant when the MH enters the idle state. If the attention timer expires, an attentive MH that is in the attention/cell-connected state or attention/PA-connected state enters the idle state by conducting PA-based registration. Through this PA-based registration, the MH can register a PAI of the current PA at the PAgnt. A paging agent care-of address (PAgnt-CoA) of the current PAgnt is registered at the corresponding HA of the MH. Whenever an idle MH moves to a different PA or PAgnt, the MH enters the attention/PA-connected state to conduct the PA-based registration and the attention timer is reset and restarted.
When data packets which are destined for an idle MH arrive at the HA, the packets are tunneled to the PAgnt. Hence, the HA is unaware of the idle state of the MH. The PAgnt buffers the data packets and sends paging request messages to ARs in the PA. The signaling messages can be sent to MHs via access points which are connected to the ARs. The corresponding idle MH enters the receiving state, and sends paging reply messages to the PAgnt. Concurrently, the MH registers its CCoA at the HA as the MIP registration. The PAgnt can forward the buffered data packets.
The MH power-off state can be detected by a BL-based registration and an unsuccessful paging. When the HA or the PAgnt sets a limitation on the maximum binding lifetime, the BL-based registration can be used to detect the power-off state of MHs. The network considers the MH state as detached when it detects a silence for more than an agreed time period or the MH does not respond to paging.
In the proposed scheme, an MH has the following seven states:
(1) Transmitting: The MH registers its CCoA at the corresponding HA as in MIP. In this state, the MH has outgoing sessions and remains in the state for the session holding time. The MH exits this state upon completing outgoing data sessions or upon a switch-off action.
(2) Receiving: After the MH registers its CCoA at the corresponding HA as in MIP, the MH can receive data packets. An exit from this state is caused by incoming session completion or a switch-off action.
(3) Attention/cell-connected: There is no incoming or outgoing session for the MH. The MH conducts a cell-based registration whenever it enters this state from the off/attached or detached state or changes its serving AR; thus, the MH location is known in the network with cell accuracy. When an incoming or outgoing session arrives, the MH enters the transmitting or receiving state. When the attention timer expires, the MH enters the idle state by performing a PA-based registration.
(4) Attention/PA-connected: The MH conducts the PA-based registration; thus, the MH location is known in the network with PA accuracy. When the attention timer expires, the MH reenters the idle state. When an incoming or outgoing session arrives, the MH enters the communicating state. If the MH is switched off, it enters the off/attached state.
(5) Idle: The MH is not currently involved in ongoing sessions or signaling messages, so the idle MH can operate in a power-saving mode. When the idle MH moves to a different PA or PAgnt, it enters the attention/PA-connected state to perform the PA-based registration. When an MH is in the idle state, its current location information is maintained in terms of PA.
(6) Off/attached: If the MH is powered off, the PAgnt is not immediately informed of the power-off state. The power-off state can be detected by a BL-based registration and an unsuccessful paging. When the binding lifetime expires or paging is unsuccessful, the network detaches the MH. When the MH is switched on, it enters the attention/cell-connected state by performing the MIP registration.
(7) Detached: If the network detects an MH switch-off action, it detaches the MH. The MH neither responds to paging nor sends location registration messages.
Stationary Probabilities and Steady State Probabilities
We derive the stationary probabilities and steady state probabilities of the seven MH states for the proposed energy-efficient mobility management scheme in IP-based wireless networks in compact form. MH state transitions are shown in Figure 2. We assume the following density functions of random variables:
• Incoming (receiving) and outgoing (transmitting) sessions occur at an MH according to a Poisson process with parameters λ_i and λ_o, respectively.
• The cell and PA residence durations are exponentially distributed with parameters 1/λ_c and 1/λ_PA, respectively.
• Switch-off actions take place according to a Poisson process with a parameter of λ_off.
• The duration that an MH remains switched off follows an exponential distribution with a parameter of 1/µ_off.
Since the residence time of the MH in each state is not exponentially distributed, we analyze the MH state transitions using a semi-Markov process approach [15][16][17]. The stationary probabilities of the imbedded Markov chain are obtained by solving the balancing equations

π_j = Σ_i π_i P_ij,   (1)
Σ_i π_i = 1,   (2)

where π_i denotes the stationary probability of state i and P_ij denotes the state transition probability from state i to state j. The state transition probability matrix P = [P_ij] for the MH state transitions (Equation (3)) is constructed from the transitions shown in Figure 2. From Equations (1)-(3), the stationary probabilities of the seven MH states are solved in closed form, where D consists of the state transition probabilities P_ij. The steady state probabilities of the semi-Markov process are obtained by

P_i = π_i T_i / Σ_j π_j T_j,

where T_i denotes the mean sojourn time of the MH in state i [15]. The steady state probabilities of the seven MH states for the proposed scheme are derived in compact form [16], where λ_i denotes the mean arrival rate of incoming sessions, λ_o denotes the mean arrival rate of outgoing sessions, 1/λ_c denotes the mean cell residence duration, 1/λ_PA denotes the mean PA residence duration, λ_off denotes the mean switch-off rate, 1/µ_off denotes the mean switch-off duration, and T_A denotes the attention timer value. The session holding time is assumed to follow a general distribution with a density function f_s(t) and mean 1/µ_s, where F*_s(θ) is the Laplace transform of f_s(t). Since the Gamma distribution has the same trend as a Pareto distribution in terms of variance impact, it is useful for modeling data packet transmission times [18][19][20][21].
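The two relations above translate directly into code. The following Python sketch solves the embedded-chain balance equations for a small illustrative transition matrix and converts the stationary probabilities into semi-Markov steady state probabilities using hypothetical mean sojourn times; the matrix and sojourn times are placeholders, not the paper's closed-form seven-state expressions.

```python
# Sketch: stationary probabilities of an embedded Markov chain and the
# corresponding semi-Markov steady state probabilities.
# P and the mean sojourn times are illustrative placeholders, not the
# seven-state expressions derived in the paper.
import numpy as np

# Row-stochastic transition matrix of the embedded chain (3 toy states).
P = np.array([[0.0, 0.6, 0.4],
              [0.5, 0.0, 0.5],
              [0.7, 0.3, 0.0]])

# Solve pi = pi P together with sum(pi) = 1.
n = P.shape[0]
A = np.vstack([P.T - np.eye(n), np.ones(n)])     # (P^T - I) pi = 0 and 1^T pi = 1
b = np.zeros(n + 1); b[-1] = 1.0
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

# Semi-Markov steady state probabilities: P_i = pi_i * T_i / sum_j pi_j * T_j.
T = np.array([2.0, 0.5, 5.0])                    # hypothetical mean sojourn times
steady = pi * T / np.sum(pi * T)

print("embedded-chain stationary probabilities:", np.round(pi, 4))
print("semi-Markov steady state probabilities :", np.round(steady, 4))
```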
The Erlang and Gamma distributions can be used for the session holding time. The interval of BL-based registration is assumed to follow a general distribution with a density function f_r(t) and a mean of 1/λ_r, where F*_r(θ) is the Laplace transform of f_r(t).
The Power-Saving Effect
The power-saving effect of the proposed scheme is analyzed in comparison with the conventional scheme for Mobile IP-based wireless networks. To analyze the power-saving effect, let Pc_i denote the power consumption in state i. The energy consumption for the proposed scheme, E_prop, is obtained by weighting the per-state power consumption Pc_i with the steady state probabilities of the seven states (Equation (23)). When an idle MH moves to a different PA or PAgnt, the MH enters the attention/PA-connected state to perform the PA-based registration, and the idle MH enters the communicating state when either an incoming or outgoing session arrives. Thus, the idle MH can operate in a power-saving mode, since it has no ongoing sessions or signaling messages, and its power consumption is lower than in the attentive and communicating states. Since the conventional MIP protocol [1-2] supports registration but not paging, it does not distinguish active MHs from idle ones; hence, the value of the attention timer T_A approaches ∞, and the steady state probabilities for the conventional MIP scheme are obtained as the corresponding limiting values (Equation (24)). The energy consumption for the conventional MIP scheme, E_conv, then follows in the same way (Equation (25)). From Equations (23)-(25), the relationship between the energy consumption of the proposed and conventional schemes is E_prop ≤ E_conv. Therefore, the energy consumption for the proposed scheme is always lower than that for the conventional scheme, and the proposed scheme with a proper value of the attention timer T_A yields significant power savings compared with the conventional mobility management scheme for Mobile IP-based wireless networks.
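To make the E_prop ≤ E_conv comparison concrete, the sketch below computes average power under two hypothetical state-occupancy vectors: one with a non-zero idle probability (proposed scheme) and one in which that time is instead spent in the attention/cell-connected state (conventional scheme, T_A → ∞). The per-state power values and probabilities are assumptions chosen only to respect the ordering implied by the text, not the paper's numbers.

```python
# Sketch: average MH power as a probability-weighted sum of per-state power draws.
# State order: transmitting, receiving, attention/cell, attention/PA, idle, off/attached, detached.
# All numbers are hypothetical; they only respect "idle draws less than attentive, off draws zero".
import numpy as np

power_w = np.array([1.2, 1.0, 0.6, 0.6, 0.05, 0.0, 0.0])      # per-state power (W), assumed

p_proposed = np.array([0.03, 0.04, 0.05, 0.02, 0.76, 0.08, 0.02])
# Conventional MIP (T_A -> infinity): idle time is spent cell-connected instead.
p_conventional = p_proposed.copy()
p_conventional[2] += p_conventional[4]
p_conventional[4] = 0.0

avg = lambda p: float(np.dot(p, power_w))
print(f"proposed    : {avg(p_proposed):.3f} W")
print(f"conventional: {avg(p_conventional):.3f} W")            # always >= proposed here
```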
Network Cost
We analyze the network cost due to registration and paging messages using the steady state probabilities. When an MH is in the transmitting, receiving or attention/cell-connected state, the MH performs a cell-based registration. If an MH is in the attention/PA-connected or idle state, the MH performs a PA-based registration, and the system can page the MH if data packets are destined for the MH. If an MH is not switched off, it performs a BL-based registration.
Let ρ_MH and A_tot denote the density of MHs within the total area and the size of the total area, respectively. The rates of cell-based registration, PA-based registration, BL-based registration, and paging messages for the proposed scheme are expressed in Equations (26)-(29); the paging rate, for example, is

λ^pag_prop = ρ_MH A_tot (P_4 + P_5 + k P_6) λ_i N_cell/PA,   (29)

where N_cell/PA is the number of cells in a PA, N_BL,i is the mean number of BL-based registrations during the mean sojourn time in state i, and k is the number of paging repetitions when paging is unsuccessful. In Equation (27), λ_PA is expressed with λ_c and N_cell/PA under the assumptions of a square-shaped configuration of cell and PA and a fluid-flow mobility model of MHs [5], [22]. In Equation (29), λ^pag_prop is expressed with N_cell/PA because an additional paging cost is incurred as different cells in the PA transmit the same paging messages.
We consider that MHs move at an average speed of V_i according to the environment type, which consists of a stationary user environment (i = 0), an urban in-building environment (i = 1), an urban pedestrian environment (i = 2), and an urban vehicular environment (i = 3) [23][24][25]. MHs are considered to move in directions uniformly distributed over [0, 2π] and to be uniformly distributed with a density of ρ_MH. The average numbers of MHs crossing out of the cell and the PA per unit time, r_cell and r_PA, respectively, are given by the fluid-flow model, where l_cell and l_PA are the lengths of the cell perimeter and the PA perimeter, respectively, and the average rates of cell updates and PA updates of an MH are obtained from r_cell and r_PA. Let r_1 be the time interval from the instant that the MH enters state i to the instant that the MH conducts the first BL-based registration in the state, and let f_r1(t) and f_Ti(t) denote the density functions of the time interval r_1 and the sojourn time T_i, respectively. The mean number of BL-based registrations N_BL,i is then obtained from Equation (30), where F*_r1(θ) and F*_Ti(θ) denote the Laplace transforms of f_r1(t) and f_Ti(t), respectively; Equation (30) is evaluated using the Residue theorem [26]. We define a cost function C^tot_prop as the weighted sum of the rates of cell-based registration, PA-based registration, BL-based registration, and paging messages for the proposed scheme, where w_cell, w_PA, w_BL, and w_pag are weighting factors. If the registration interval and session holding time follow exponential distributions, an optimal value of the mean rate of BL-based registration, λ*_r, that minimizes C^tot_prop can be derived (Equation (34)). The optimal rate of BL-based registration is determined by the incoming session rate, the duration that an MH remains switched off, the number of paging repetitions, the number of cells in a PA, the switch-off rate, and the weighting factors w_pag and w_BL. We can also derive an optimal value of the number of cells in a PA, N*_cell/PA, that minimizes C^tot_prop (Equation (35)), where the sum of steady state probabilities P_4 + P_5 is derived from Equations (19) and (20). The optimal number of cells in a PA is determined by the incoming session rate, the cell-based registration rate, the number of paging repetitions, the weighting factors w_PA and w_pag, and the steady state probabilities P_4 + P_5 and P_6.
Since the conventional MIP protocol [1][2] does not distinguish between active and idle MHs, the value of the attention timer T_A approaches ∞, and the rates of cell-based registration, BL-based registration, and paging messages for the conventional scheme follow accordingly (Equations (37)-(42)). If an MH is in the off/attached state, the MH does not respond to paging for incoming sessions; we consider this unsuccessful paging in the conventional MIP protocol. We define the network cost for the conventional scheme as the weighted sum of the rates of cell-based registration, BL-based registration, and paging messages. From Equations (26)-(33) and (37)-(42), the relationship of the registration cost between the proposed and conventional schemes is C^reg_prop ≤ C^reg_conv if w_cell = w_PA. Since the conventional scheme considers only unsuccessful paging for the off/attached MHs, the relationship of the paging cost between the proposed and conventional schemes is C^pag_prop ≥ C^pag_conv.
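The trade-off that produces an optimal PA size can be illustrated numerically. In the sketch below, the PA-update rate is assumed to scale as λ_c/√N for a square PA of N cells and the paging cost to scale linearly with N, as described for Equation (29); the constants and weights are hypothetical, and the true optimum in the paper follows from Equation (35).

```python
# Sketch: grid search for the PA size that minimizes a simplified network cost.
# Assumed scalings: PA-update rate ~ lambda_c / sqrt(N) for a square PA of N cells,
# paging cost ~ N (every cell in the PA is paged). Constants and weights are hypothetical.
import numpy as np

lambda_c = 4.0          # cell-crossing (update) rate per hour, assumed
lambda_i = 0.5          # incoming session rate per hour, assumed
w_pa, w_pag = 1.0, 0.3  # weighting factors, assumed

def total_cost(n_cells_per_pa: int) -> float:
    pa_update_cost = w_pa * lambda_c / np.sqrt(n_cells_per_pa)
    paging_cost = w_pag * lambda_i * n_cells_per_pa
    return pa_update_cost + paging_cost

candidates = np.arange(1, 26)
costs = np.array([total_cost(n) for n in candidates])
best = candidates[np.argmin(costs)]
print(f"optimal PA size: {best} cells, cost = {costs.min():.3f}")
```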
Numerical Examples
The effects of various input parameters on the steady state probabilities and power consumption of the proposed and conventional schemes are investigated. The values of the input parameters assumed for the numerical examples are shown in Table 1 [14][15][16], [27][28][29].

Table 1. Input parameters.

Figure 3 shows the effect of the session arrival rate λ_s on the steady state probabilities, where λ_s = λ_i + λ_o. The higher the session arrival rate, the higher the transition probabilities P_31, P_41, P_51, P_32, P_42, and P_52 due to the high rate of incoming or outgoing session arrivals. Therefore, as the value of λ_s increases, the probability P_1 + P_2 increases, but the probability P_5 decreases. Figure 4 shows the effect of the session arrival rate λ_s on the power consumption for the proposed and conventional schemes. The power consumption is calculated as the energy consumption divided by time; the energy consumption for the proposed and conventional schemes is calculated using Equations (23) and (25), respectively. The power consumption for both schemes increases as the value of λ_s increases, because it is more likely that MHs stay in the transmitting and receiving states. The MH power consumption in the proposed scheme is approximately 0.1126 W, 0.1763 W, and 0.4756 W at the session arrival rates λ_s = 1 (/h), λ_s = 2 (/h), and λ_s = 10 (/h), respectively. The MH power consumption for the conventional scheme is approximately 0.6521 W, 0.6625 W, and 0.7119 W at λ_s = 1 (/h), λ_s = 2 (/h), and λ_s = 10 (/h), respectively. Thus, for 2 ≤ λ_s ≤ 10, the power consumption of the proposed scheme is reduced by about 33.2%-73.4% compared with the conventional scheme. Furthermore, if the session arrival rate is low (λ_s ≤ 1), the proposed scheme can save about 82.7% of the MH battery power consumption compared with the conventional scheme in Mobile IP-based wireless networks. Figure 5 shows the effect of the switch-off rate λ_off on the steady state probabilities. As the switch-off rate λ_off increases, the probabilities P_6 and P_7 increase, but the probabilities P_1, P_2, P_3, P_4, and P_5 decrease, because it is more likely that MHs stay in the off/attached and detached states. Figure 6 compares the power consumption of the proposed scheme with that of the conventional scheme for varying values of λ_off and µ_off. When an MH is in the off/attached or detached state, the MH does not consume power because it is powered off. As the switch-off rate λ_off increases and the mean switch-off duration 1/µ_off increases, the power consumption for both schemes decreases. Figure 7 shows the effect of the attention timer value T_A on the power consumption of the proposed scheme for various values of the BL-based registration rate λ_r when the registration interval is fixed (solid line) or exponentially (dashed line) distributed. As the values of T_A and λ_r increase, the power consumption of the proposed scheme increases because the probability P_5 that an MH stays in the idle state decreases. More MH battery power is saved as the value of the attention timer T_A decreases; however, the value of T_A needs to be determined by considering the incoming session rate and the data session delay. The results show that the power consumption of the proposed scheme depends on the distribution of the registration intervals: fixed registration intervals are better than exponential registration intervals from the viewpoint of the power-saving effect of the proposed scheme. Figure 8 shows the effect of the cell update rate λ_c on the power consumption of the proposed scheme for various values of N_cell/PA.
As MH mobility increases, both the cell update rate λ_c and the power consumption increase, because the probability P_5 that the MH stays in the idle state decreases as the cell update rate λ_c increases. As the number of cells in a PA, N_cell/PA, decreases, both the PA update rate λ_PA and the power consumption increase.

Table 2. Input parameters [23][24][25].

Figure 10 shows the effect of N_cell/PA on the registration cost (C^reg_prop, C^reg_conv), paging cost (C^pag_prop, C^pag_conv), and total network cost (C^tot_prop, C^tot_conv) for the proposed and conventional schemes. The network costs for the conventional scheme are fixed with varying N_cell/PA according to Equations (37)-(42). The numerical results show that C^pag_prop ≥ C^pag_conv and C^reg_prop ≤ C^reg_conv when w_cell = w_PA. For the proposed scheme, an optimal number of cells in a PA, N*_cell/PA, can exist; the total network cost with the given input parameters reaches its minimum at N*_cell/PA = 7, which is consistent with Equation (35). We define the combined network and power cost for the proposed scheme, C^comb_prop, and that for the conventional scheme, C^comb_conv, as weighted sums of the network cost and the power consumption, where w_net and w_pow are the weighting factors for network cost and power consumption, respectively. Figure 11 shows the combined network and power cost C^comb_prop for the proposed scheme with varying values of N_cell/PA and w_pow. As the weighting factor w_pow increases through 1, 25, and 50 with a fixed value of w_net, the optimal number of cells in a PA, N*_cell/PA, for the combined network and power cost increases through 7, 9, and 11, because the MH power consumption for the proposed scheme decreases as N_cell/PA increases, as shown in Figure 8. Figure 12 shows the effect of the mean rate of BL-based registration λ_r on the network cost of the proposed scheme, which consists of the cell update cost C^cell_prop, PA update cost C^PA_prop, BL-based registration cost C^BL_prop, and paging cost C^pag_prop. As the BL-based registration rate λ_r increases, the BL-based registration cost increases and the paging cost decreases due to the frequent BL-based registrations. The rates of cell-based registration and PA-based registration are fixed with varying values of λ_r according to Equations (26) and (27). Figure 13 shows the effect of λ_r on the total network cost C^tot and the optimal BL-based registration rate λ*_r for two types of distributions of registration intervals, exponential (dashed line) and fixed (solid line). The result shows that the distribution of the BL-based registration intervals affects the network cost. If the BL-based registration intervals follow an exponential distribution, the total network cost C^tot reaches a minimum value of 10.98 at λ*_r = 0.8, which is consistent with Equation (34). When fixed BL-based registration intervals are utilized, the minimum network cost is 10.74 at λ*_r = 0.8. Therefore, fixed BL-based registration intervals are preferred over exponential BL-based registration intervals, since they yield a lower network cost in the proposed scheme for IP-based wireless networks. Figure 14 shows the combined network and power cost C^comb_prop for the proposed scheme with exponential BL-based registration intervals for various values of λ_r and w_pow. As shown in Figure 7, the power consumption of MHs for the proposed scheme increases as λ_r increases. Hence, as the weighting factor w_pow increases through 1, 10, and 20 with a fixed value of w_net, the optimal BL-based registration rate λ*_r for the combined network and power cost decreases through 0.8, 0.7, and 0.6.

Figure 13. The optimal binding-lifetime-based registration rate λ*_r of the proposed scheme.
Conclusions
An energy-efficient mobility management scheme was proposed to reduce MH power consumption in IP-based wireless networks. The proposed scheme manages the following seven MH states, including transmitting, receiving, attention/cell-connected, attention/PA-connected, idle, off/attached, and detached states, to efficiently manage battery power, radio resources, and network load. The MH state transition behavior was modeled. We derived the stationary probabilities and steady state probabilities of the MH states for the proposed and conventional schemes in IP-based wireless networks in compact form. The effects of various input parameters on the MH steady state probabilities and power consumption were investigated in the proposed scheme compared to the conventional scheme with consideration of exponential and fixed distributions on interval of BL-based registration. The proposed scheme yielded significant power savings compared with the conventional mobility management scheme for Mobile IP based wireless networks. Network costs such as cell updates, PA updates, BL-based registrations, and paging messages were analyzed in the proposed and conventional schemes for IP-based wireless networks. The optimal values of PA size and registration interval were derived to minimize the network cost of the proposed scheme. The effects of various input parameters on the network cost were investigated. We also investigated the combined network and power cost with various weighting factors for the proposed and conventional schemes. These analytical results provide guidelines to select proper system parameters. The results can be utilized to analyze the performance of mobility management schemes in IP-based wireless networks.
[pes2o/s2orc record 257013882 | added 2023-02-19 | created 2023-02-16 | license CC-BY (GOLD OA) | year 2023 | https://www.mdpi.com/2073-4441/15/4/783/pdf?version=1677133603]
Multi-Objective Lower Irrigation Limit Simulation and Optimization Model for Lycium barbarum Based on NSGA-III and ANN
Abstract: Lycium barbarum has rich medicinal value and is an important medicinal and economic tree species in China, with an annual output value of 21 billion RMB. The yield and quality of Lycium barbarum dry fruit are the crucial issues that affect the cultivation of Lycium barbarum and the income of farmers in the water-short Ningxia region. According to the local acquisition standard for Lycium barbarum, the amount of dry fruit per 50 g (ADF-50) is the key factor in evaluating quality and determining the purchase price. In order to optimize the lower irrigation limit of an automatic drip irrigation system with multiple objectives, yield and ADF-50 are selected as the optimization objectives. The lower irrigation limits of the automatic drip irrigation system in the full flowering stage, the summer fruiting stage, and the early autumn fruiting stage are optimized by the third generation of the non-dominated sorting genetic algorithm (NSGA-III) in this paper. The mathematical relationships between the lower irrigation limit and irrigation quantity, between irrigation amount and yield, and between irrigation amount and ADF-50 were established by a water balance model, a water production function (WPF), and an artificial neural network model (ANN), respectively. The accuracy of the water balance model and the ANN was verified by experiments. The experiments and optimization results show that: (1) the irrigation quantity and ADF-50 calculated by the water balance model and the ANN are accurate, with Nash-Sutcliffe coefficients of 0.83 and 0.66, respectively; (2) within a certain range of irrigation quantity, ADF-50 and Lycium barbarum yield show a competitive relation, and by solving the NSGA-III optimization model, lower-irrigation-limit schemes that favor different objectives, as well as a compromise scheme, can be obtained; (3) compared with the original lower limit of irrigation water, the compromise scheme improves the yield and quality of Lycium barbarum by 10.7% and 8.8%, respectively. The results show that the lower-irrigation-limit scheme of the automatic drip irrigation system optimized by the model can improve not only the yield but also the quality of Lycium barbarum. This provides a new idea for establishing the lower irrigation limit of automatic drip irrigation systems in Lycium barbarum planting areas.
Introduction
Lycium barbarum is an important commercial crop and medicinal food herb in the Ningxia autonomous region of China and has high medicinal value. Adding Lycium barbarum to a regular diet can effectively nourish one's liver and kidneys [1][2][3]. The Ningxia autonomous region has a continental arid climate, and its annual precipitation does not exceed 400 mm. The shortage of irrigation water resources has become the main factor limiting the development of the Lycium barbarum industry. Under limited water resources, optimizing the irrigation system for Lycium barbarum can further reduce irrigation losses while maintaining yield and quality, which is of great significance to arid areas in western China. A large number of scholars have studied the relationship between the irrigation system and the yield and quality of Lycium barbarum [4][5][6][7], but most existing research selected the better irrigation scheduling by comparing different irrigation treatments in plot experiments, mostly under manually controlled irrigation [8][9][10]. Using such scheme comparisons to optimize the irrigation system has two disadvantages: (1) the experiment period is too long; (2) when there are too many factors to be optimized in the irrigation scheduling, large gaps between treatments are unavoidable, which reduces optimization accuracy. With the improvement of automation technology, an increasing number of Lycium barbarum plantations are changing to automated irrigation systems. However, there is little research on the optimization of automatic irrigation systems.
In recent years, more scholars have adopted simulation-optimization coupling models to optimize their research objects [11][12][13][14][15]. A simulation-optimization coupling model can improve not only the optimization efficiency but also the accuracy in a short time through a large amount of simulated experimentation [16,17]. With the increasing number of objectives in optimization problems, more researchers use the non-dominated sorting genetic algorithm (NSGA) as the optimization model in such coupling models, as it handles multiple competing objectives well. For example, Liu et al. used NSGA to optimize irrigation scheduling under different precipitation and evaporation conditions with the aim of maximizing water production efficiency and yield [18]; in order to resolve the contradiction between energy consumption and crop yield in a pressurized irrigation network, M.T. Carrillo Cobo adopted NSGA to optimize the irrigation pattern of the network [19]. But so far, there has been no study on optimizing the lower limit of automatic drip irrigation of Lycium barbarum with NSGA. The third generation of the NSGA algorithm is therefore introduced into the lower irrigation limit optimization problem of automatic drip irrigation of Lycium barbarum to solve this multi-objective optimization problem more efficiently. Using a simulation-optimization coupling model to optimize the irrigation system of Lycium barbarum requires establishing the mathematical relationships between irrigation quantity, yield, and quality. Because the optimization object in this study is the lower irrigation limit of the automatic drip irrigation system, we also need to establish the mathematical relationship between the lower irrigation limit and irrigation quantity. A water balance model is a good choice for this: it is convenient to calculate and requires few parameters. Liu et al. established a water balance model to simulate water transport in an irrigation area and obtained accurate simulation results [20]. Therefore, a water balance model of the Lycium barbarum active root layer was established to calculate the irrigation amount according to the upper and lower irrigation limits and the water content.
In addition, most current studies have established water production functions for crops, including Lycium barbarum [21][22][23], but there has been no research on the mathematical relationship between Lycium barbarum quality and irrigation scheduling. In this study, the quality of Lycium barbarum was measured by ADF-50, but the mechanism by which the irrigation amount influences ADF-50 remains unclear. With the development of artificial intelligence in recent years, artificial neural networks (ANN) are widely used to construct prediction or simulation models. For example, Kasiviswanathan et al. developed an ANN to forecast weekly reservoir inflows along with their uncertainty, with good prediction precision [24]; Saber et al. created an accurate and reliable ANN model of irrigation parameters to predict irrigation water quality [25]. When an ANN model is used to construct the underlying mathematical relationship between parameters, it is not necessary to define the physical relationship between them, and compared with the widely used approach of fitting a functional relationship by least squares, a neural network model does not require the functional form of the relationship to be specified in advance [26,27]. Therefore, in this study, a neural network model is used to establish the mathematical relationship between ADF-50 and irrigation quantity.
After all the simulation models had been established, the ADF-50 neural network model, the water content simulation model of the active root layer, and the soil water production function were embedded into the genetic algorithm as the objective functions. The smaller the ADF-50, the better the quality of the Lycium barbarum fruit. Therefore, the maximum yield and the minimum ADF-50 were taken as the objectives, and the lower irrigation limit was taken as the decision variable. A simulation-optimization coupling model based on a neural network and the multi-objective genetic algorithm NSGA-III was used to optimize the lower irrigation limit of the automatic drip irrigation system in the Lycium barbarum planting area. Previous studies have shown that the maximum yield of Lycium barbarum and the minimum ADF-50 cannot be reached at the same time [28], and there are few studies on automated drip irrigation schemes that can balance the yield and quality of Lycium barbarum. Therefore, this study aims to establish reasonable lower irrigation limit schemes, tending to different objectives, for the automated drip irrigation system of Lycium barbarum using the NSGA-III algorithm. In addition, we obtain a compromise scheme by assigning equal weight to the different objective function values.
Overview of the Study Area
The experiment was conducted from March 2018 to October 2019 in Ningxia Zhongwei Jiusheng Agricultural Park (105°06′ E, 37°27′ N) at the intersection of Ningxia, Inner Mongolia, and Gansu provinces in the middle and upper reaches of the Yellow River. Its altitude is 1231 m, and the region has a typical temperate continental monsoon climate: perennially dry, with little rainfall, abundant sunshine, and large temperature differences between day and night. The effective annual precipitation was about 147 mm, mainly in July and August, accounting for 57.23% of the year. The annual average temperature was 9.2 °C. The annual average sunshine duration was 2728.0 h, and the annual evaporation was 1921 mm. The frost-free period was about 150 days. The wind speed was about 2-6 m/s, and the frozen depth of soil in the park was about 1 m. The soil in the park is sandy loam with a porosity of 47% and a field water capacity of 19.8%. In this study, 10 sites were selected (Figure 1).
Lycium Barbarum's Active Root Layer Water Balance Model
Although the water balance model cannot reflect the groundwater migration process, it is widely used because of its high calculation efficiency. The principle of water balance was therefore used to simulate the change in water content of the active layer of Lycium barbarum's root in this study. The field water capacity was taken as the initial water content of the active root layer system. The water balance method was used to calculate the water content of the active root layer system day by day, and the calculation formula is shown in Equation (1). When the water content was lower than the lower irrigation limit, the irrigation times and irrigation quota for the whole growth period of Lycium barbarum were recorded.
where Wi and Wi-1 are the water content of the active root layer per unit area on day i and day i-1, respectively; h is the thickness of the active root layer (mm); Pi is the rainfall on day i (mm); Ii is the irrigation volume on day i (mm); Ks is the soil moisture coefficient; ET0,i is the reference crop evapotranspiration on day i (mm), calculated with the Penman-Monteith model recommended by the Food and Agriculture Organization of the United Nations (FAO); and Gi is the sum of groundwater leakage and recharge in the active root layer (mm).
Because drip irrigation was used in this study and the buried depth of the groundwater level was 20 m, groundwater leakage and recharge are ignored in this model. In addition, in order to simplify the simulation process, when the water content in the active root layer exceeds the field capacity due to rainfall, the model assumes that the excess water is discharged on the same day, and the water content in the active root layer is then equal to the field capacity. In this study, the Nash-Sutcliffe coefficient was used to assess the water balance model (Equation (2)).
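To make the day-by-day bookkeeping concrete, the sketch below implements the simplified balance described above (groundwater exchange ignored, excess water above field capacity drained the same day, and an irrigation event triggered when the water content falls below the stage's lower limit). The function and variable names, and the idea of a fixed irrigation quota per event, are illustrative assumptions rather than the authors' exact implementation.

```python
def simulate_water_balance(rain, et0, kc, lower_limit, field_capacity,
                           irrigation_quota, init=None):
    """Daily root-layer water balance (all quantities in mm of water).

    rain, et0: daily rainfall and FAO Penman-Monteith reference ET series.
    kc: soil moisture / crop coefficient applied to et0.
    lower_limit: irrigation trigger level; irrigation_quota: mm applied per event.
    Returns the total irrigation amount and the number of irrigation events.
    """
    w = field_capacity if init is None else init   # start at field capacity
    total_irrigation, events = 0.0, 0
    for p, et in zip(rain, et0):
        w = w + p - kc * et                        # groundwater terms ignored
        if w < lower_limit:                        # trigger an automatic drip event
            w += irrigation_quota
            total_irrigation += irrigation_quota
            events += 1
        w = min(w, field_capacity)                 # excess drains the same day
    return total_irrigation, events
```

In the coupling model described later, the lower limits for the three growth stages would be mapped onto `lower_limit` for the corresponding date ranges.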
where Oi is the experimental (observed) value, Si is the simulated value, and Ō is the mean of the experimental values; Equation (2) is the usual Nash-Sutcliffe form, NSE = 1 - Σ(Oi - Si)² / Σ(Oi - Ō)².
The closer the Nash-Sutcliffe coefficient is to 1, the better the simulation effect is.
Water Production Function of Lycium barbarum
The water production function of Lycium barbarum is an expression describing the mathematical relationship between the yield of Lycium barbarum and the amount of irrigation water. The expression of the water production function is shown in Equation (3) [29], with a correlation coefficient of r = 0.9547.
where Y is yield; x is the irrigation water amount in the whole growth period of Lycium barbarum.
Once the irrigation quantity for the whole growth period of Lycium barbarum had been calculated by the water balance model, it was substituted into the water production function (Equation (3)) to calculate the yield.
ADF-50 Artificial Neural Network Model
The artificial neural network model is a mathematical model that simulates the structure of the neural network in the brain. It consists of an input layer, a hidden layer, and an output layer (Figure 2). After being trained on historical data, a neural network with a specific mathematical relationship between the input and output values is established. In this study, a back-propagation artificial neural network (BP-ANN) was used to construct the mathematical relationship between irrigation quantity and ADF-50; BP-ANN is a multilayer feedforward neural network trained by an error back-propagation algorithm. The connection weights between the hidden layers were adjusted according to the error between the measured value and the output value until the error was less than the allowed value. The training steps of the BP-ANN were as follows.

Step 1: training sample expansion. A large number of training samples are needed to ensure the accuracy of the BP-ANN model, but the amount of ADF-50 data was insufficient. Therefore, in addition to the data obtained from the experiment, Lycium barbarum farmers using automatic drip irrigation systems and growing the same variety and tree age were surveyed. A total of 50 groups of data were obtained, of which 10 were used to verify the BP-ANN's accuracy. To further compensate for the lack of training data, the remaining 40 groups were expanded [30], generating 400 data samples.
Step 2: initialize the neural network. The input values were the irrigation amounts of Lycium barbarum in the full flowering stage (T1), summer fruit stage (T2), and early autumn fruit stage (T3), and the output value was ADF-50. Two hidden layers were set, and the numbers of nodes in the input layer, hidden layers, and output layer were N = 3, L = 7, and M = 1, respectively. The initial weights Wih, Whh, and Who of each layer were set to 0.5, the learning rate was 0.1, the initial thresholds a and b for the hidden and output layers were 0.3, and the sigmoid function was selected as the activation function. Its expression is shown in Equation (4).
Step 3: calculate the hidden layers' outputs. The outputs were calculated from the input variables X, the weights Wih and Whj, and the threshold ai. The calculation formulas are shown in Equations (5) and (6).
Step 4: calculate the output value of the output layer. Based on the output value h2j of the second hidden layer, the connection weights Wjo, and the threshold, the output value PADF-50 was calculated. The calculation formula is shown in Equation (7).
Step 5: calculate the error between the measured ADF-50 value and the value calculated by the neural network; the calculation formula is shown in Equation (8).
Step 6: update the weights and thresholds. According to the error e, the weights and thresholds were updated using the formulas shown in Equations (9)-(14). The Nash-Sutcliffe coefficient was also used to assess the ADF-50 artificial neural network model.
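For readers who want to see the shape of this network in code, the following is a minimal sketch of a 3-7-7-1 feedforward network with sigmoid activations trained by backpropagation, matching the layer sizes and initial values given above; the stopping rule and the exact update formulas of Equations (4)-(14) are simplified here, so this is an illustrative reconstruction rather than the authors' implementation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class ADF50Net:
    """3-7-7-1 BP network: inputs are stage irrigation amounts (T1, T2, T3),
    output is ADF-50. Weights start at 0.5, thresholds at 0.3, learning rate
    0.1, following the initialisation described in the text."""

    def __init__(self, sizes=(3, 7, 7, 1), lr=0.1):
        self.lr = lr
        self.W = [np.full((m, n), 0.5) for n, m in zip(sizes[:-1], sizes[1:])]
        self.b = [np.full((m, 1), 0.3) for m in sizes[1:]]

    def forward(self, x):
        # x is a column vector of shape (3, 1); returns all layer activations.
        a = [x]
        for W, b in zip(self.W, self.b):
            a.append(sigmoid(W @ a[-1] + b))
        return a

    def train_step(self, x, y):
        a = self.forward(x)
        delta = (a[-1] - y) * a[-1] * (1 - a[-1])        # output-layer error
        for i in reversed(range(len(self.W))):
            grad_W = delta @ a[i].T
            grad_b = delta
            if i > 0:                                     # backpropagate the error
                delta = (self.W[i].T @ delta) * a[i] * (1 - a[i])
            self.W[i] -= self.lr * grad_W
            self.b[i] -= self.lr * grad_b
        return float(0.5 * np.sum((a[-1] - y) ** 2))      # squared-error loss
```

Inputs and the ADF-50 target would normally be scaled to [0, 1] before training so that the sigmoid output can cover the target range.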
Multi-Objective Genetic Algorithm Optimization Model NSGA-III
In this study, the multi-objective genetic algorithm optimization model NSGA-III was used to optimize the lower irrigation limits of the automated drip irrigation system in the Lycium barbarum field. The model optimized the decision variables by simulating the process of biological evolution, and the optimization was essentially an iterative process. Each combination of decision variables to be optimized was treated as an individual. When the iteration started, the population size was set and the decision variables were initialized to form the first-generation population. By simulating biological evolution, each individual was evaluated with the objective functions; individuals with better evaluations were selected for crossover and mutation so as to generate a new generation of the population. Each iteration generated a new generation until the evaluation values reached the preset convergence standard.
The parameters of the NSGA-III optimization model were set as follows: population size N = 200, iterations G = 100, binary crossover and mutation were used, and the crossover and mutation probabilities were 0.6 and 0.01, respectively. When there is competition between the objective functions, NSGA-III generally cannot return a single global optimal solution to the multi-objective problem; instead it returns a non-dominated set containing solutions that tend to different objectives. These solutions perform better on their preferred objectives while ensuring that the other objectives are not too bad.
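As a rough illustration of how such a run could be wired together, the sketch below defines the three-variable, two-objective problem for NSGA-III using the pymoo library. The use of pymoo, the bound values, and the stand-in objective functions `yield_fn` and `adf50_fn` are assumptions for illustration only; they are not part of the original study.

```python
import numpy as np
from pymoo.algorithms.moo.nsga3 import NSGA3
from pymoo.core.problem import ElementwiseProblem
from pymoo.optimize import minimize
from pymoo.util.ref_dirs import get_reference_directions


class LowerLimitProblem(ElementwiseProblem):
    """Decision variables: lower irrigation limits (fraction of field capacity)
    for the full flowering, summer fruit and early autumn fruit stages."""

    def __init__(self, yield_model, adf50_model):
        super().__init__(n_var=3, n_obj=2, xl=0.40, xu=0.95)  # illustrative bounds
        self.yield_model = yield_model      # e.g. water balance + WPF
        self.adf50_model = adf50_model      # e.g. water balance + trained ANN

    def _evaluate(self, x, out, *args, **kwargs):
        # Both objectives are minimised, so yield is negated.
        out["F"] = [-self.yield_model(x), self.adf50_model(x)]


# Placeholder objective models for illustration only.
yield_fn = lambda x: float(np.sum(x))           # dummy: more water, more yield
adf50_fn = lambda x: float(np.sum(x) ** 1.1)    # dummy: more water, larger ADF-50

ref_dirs = get_reference_directions("das-dennis", 2, n_partitions=12)
algorithm = NSGA3(ref_dirs=ref_dirs, pop_size=200)
res = minimize(LowerLimitProblem(yield_fn, adf50_fn), algorithm,
               ("n_gen", 100), seed=1, verbose=False)
```

Note that pymoo's defaults use simulated binary crossover and polynomial mutation on real-valued variables, whereas the paper describes binary crossover and mutation with probabilities 0.6 and 0.01, so the variation operators in this sketch differ from the original implementation.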
Decision Variable
Three decision variables were the lower irrigation limits of automatic drip irrigation in the three growth stages: full flowering, summer fruit, and early autumn fruit. The setting of the decision variable parameters is shown in Equation (15).
where Li is the lower irrigation limit of automatic drip irrigation in growth stage i (i = 1, 2, 3).
Objective Function
This study included two objective functions: maximum yield and minimum ADF-50, as shown in Equation (16).
where fANN is the mathematical relationship between irrigation volume and ADF-50 constructed by the neural network, and Ii is the irrigation amount of Lycium barbarum in each growth stage, namely the irrigation amounts at the spring shoot stage, full flowering stage, summer fruit stage, early autumn fruit stage, and late autumn fruit stage.
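Written out with the symbols just defined, the two objectives of Equation (16) can be summarised as follows. This is a reconstruction from the description above, with $f_{\mathrm{WPF}}$ denoting the water production function of Equation (3) and $f_{\mathrm{ANN}}$ the trained ADF-50 network, whose inputs are taken here as the three optimized stages:

```latex
\max_{L_1,L_2,L_3} \; Y = f_{\mathrm{WPF}}\Big(\sum_i I_i\Big), \qquad
\min_{L_1,L_2,L_3} \; \mathrm{ADF\text{-}50} = f_{\mathrm{ANN}}\big(I_{T1}, I_{T2}, I_{T3}\big)
```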
Constraint
Considering the actual irrigation demand, the lower limit of automatic drip irrigation should not be lower than the wilting coefficient nor higher than the upper limit of irrigation water. When rainfall occurs, the water content of the active root layer may exceed the field water capacity. However, in order to simplify the water balance simulation, it was assumed that water exceeding the field water capacity was discharged from the active root layer on the same day, so that the water content of the active root layer did not exceed the field water capacity. The constraint conditions are shown in Equation (17).
where θfc is the field water capacity (in this study, the upper limit of irrigation was set at 95% of θfc); θw is the wilting coefficient; θ is the water content of the active root layer; and Li is the lower limit of drip irrigation water in each of the three optimized growth stages.
Construction of the Coupling Model
The NSGA-III optimization model calculated ADF-50 and yield by calling the ADF-50 ANN model, the water production function, and the water balance model of the Lycium barbarum active root layer. Before the optimization, the ADF-50 ANN model needed to be trained: the neural network updated its thresholds and weights by reading the training samples until the error between the calculated and actual results was less than the allowed value. The underlying mathematical relationship between ADF-50 and irrigation volume could then be represented by the trained thresholds and weights. After the neural network model was trained, NSGA-III could be used to optimize the lower limit of drip irrigation. The optimization was a cyclic process. When the cycle started, the population size and number of iterations were set, and the decision variables were initialized to form the first-generation population. Each feasible solution (a set of lower drip irrigation limits) in the population was fed into the water balance model of the Lycium barbarum active root layer to calculate the irrigation amount, and the irrigation amount was then input into the neural network model and the water production function to calculate ADF-50 and yield. The feasible solutions were subjected to non-dominated sorting according to yield and ADF-50. The individuals at the top of the sorting were selected for crossover and mutation to generate new individuals, which were merged with the previous generation to obtain a new population. These steps were repeated until the maximum number of iterations or the convergence conditions were reached. The construction flow chart of the coupling model is shown in Figure 3.
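The evaluation loop described above can be summarised as a single function that the optimizer calls for every candidate lower-limit vector. The sketch below shows that coupling, reusing the hypothetical `simulate_water_balance` helper and `ADF50Net` network from the earlier sketches; all names, the stage boundaries, and the crop coefficient are illustrative assumptions.

```python
import numpy as np

# Hypothetical coupling of the simulation models used as objective functions.
STAGES = ("full_flowering", "summer_fruit", "early_autumn_fruit")

def evaluate_candidate(lower_limits, weather_by_stage, field_capacity,
                       irrigation_quota, water_production_fn, adf50_net):
    """lower_limits: dict stage -> trigger level (mm) proposed by NSGA-III.
    weather_by_stage: dict stage -> (rain series, ET0 series).
    Returns (yield, ADF-50) for the candidate scheme."""
    stage_irrigation = {}
    for stage in STAGES:
        rain, et0 = weather_by_stage[stage]
        amount, _ = simulate_water_balance(rain, et0, kc=1.0,
                                           lower_limit=lower_limits[stage],
                                           field_capacity=field_capacity,
                                           irrigation_quota=irrigation_quota)
        stage_irrigation[stage] = amount

    total = sum(stage_irrigation.values())
    y = water_production_fn(total)                           # Equation (3)
    x = np.array([[stage_irrigation[s]] for s in STAGES])    # 3x1 ANN input
    adf50 = adf50_net.forward(x)[-1].item()                  # trained BP-ANN output
    return y, adf50
```

The two returned values are exactly what the non-dominated sorting step compares across candidates.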
Simulation Models Validation
The results of the water balance model of the Lycium barbarum active root layer and the ADF-50 neural network model were verified. The experimental data (lower limits of drip irrigation water at the three stages) that were not used as neural network training samples were input into the water balance model to calculate the irrigation quantity; the measured and simulated values are shown in Figure 4. The irrigation quantity was then input into the ADF-50 neural network model to calculate ADF-50; the calculated and measured values are shown in Figure 5. The Nash-Sutcliffe coefficients of the water balance model and the ADF-50 neural network model were 0.83 and 0.66. The nearer the Nash-Sutcliffe coefficient is to 1, the higher the simulation accuracy; if it is much less than 0, the simulation results of the model are unreliable. Although the ADF-50 neural network model had a slightly lower Nash-Sutcliffe coefficient, the simulation accuracy of both models was acceptable. The verification results showed that the water balance model could simulate the water content of the active root layer and calculate the irrigation quantity well, and that the ADF-50 neural network model could calculate ADF-50 accurately.
Optimization Results of the Drip Irrigation Lower Limit for Lycium barbarum
The purpose of the optimization was to obtain lower drip irrigation limit schemes that tend to different objectives or compromise between the two objectives. The simulation-optimization coupling model was run under the precipitation and evaporation conditions of 2018 and 2019, respectively. The objective function values of each lower irrigation limit scheme were recorded (Figures 6-8); in these figures, the larger the dot, the better the scheme performed on that objective. In order to select a scheme with compromise performance, the sum of the two objective function values, after equal weighting and normalization, was used as the scheme's score. When selecting the scheme based on this score, the lower irrigation limits at the three growth stages were about 65%, 50%, and 65% of field capacity, respectively. The maximum yield of Lycium barbarum was achieved when the lower limits were about 70%, 50%, and 70% in the full flowering, summer fruit, and early autumn fruit stages, respectively. The lower irrigation limits in the full flowering and early autumn fruit stages had little effect on ADF-50, while the most favorable lower irrigation limit at the summer fruit stage was about 50%. Three typical irrigation schedules tending to different goals were selected from the non-dominated set (Table 1): the S1 scheme is the compromise scheme that balances the two goals, the S2 scheme tends to yield, and the S3 scheme tends to ADF-50. The irrigation times and irrigation quantities of the lower irrigation limit schemes tending to different objectives were statistically analyzed (Figure 9). In descending order of irrigation times, the schemes were S2, the original scheme, S1, and S3, and the trend of the four schemes' total irrigation water volume was consistent with the irrigation times. In 2018, the irrigation times and quantity of S1 were 21 and 1945.06 m³/hm²; of S2, 25 and 2628.06 m³/hm²; of S3, 13 and 1648.10 m³/hm²; and of the original scheme, 22 and 2405.34 m³/hm². In 2019, the irrigation times and quantity of S1 were 23 and 2152.93 m³/hm²; of S2, 28 and 2910.16 m³/hm²; of S3, 14 and 1781.73 m³/hm²; and of the original scheme, 25 and 2687.45 m³/hm². From the lower irrigation limits in Table 1, it can be seen that the irrigation times and the total quantity of irrigation water were positively correlated with the lower irrigation limit. In addition, within the constraint range, both yield and ADF-50 increased with the total irrigation quantity, indicating that raising the lower irrigation limit and increasing the irrigation water can improve the yield but reduce the quality of Lycium barbarum (the larger the ADF-50, the worse the quality). Compared with the original lower drip irrigation limit, the compromise scheme S1 increased the yield by 10.7% while reducing ADF-50 by 8.8%, improving quality while also increasing yield. S2 increased yield by 32.5% at the cost of increasing ADF-50 by 4.6%. In contrast, S3 decreased ADF-50 by 26.8% at the cost of decreasing yield by 8.0%. In conclusion, these results demonstrate that the simulation models can be used as objective functions in the NSGA-III optimization model because of their high accuracy. Further, within the lower irrigation limit constraints, the two objectives have a competitive relationship: optimizing one objective comes at the expense of the other. Farmers can choose different schemes according to their preferences, maximizing yield or improving quality, or they can choose a compromise scheme.
Discussion
In order to optimize the lower irrigation limit of the automatic drip irrigation system in the Lycium barbarum field and handle the competitive relationship between yield and quality, we established a simulation-optimization model for the lower limit of automatic drip irrigation of Lycium barbarum based on the principle of water balance, a neural network, and NSGA-III. The validity and rationality of the model were verified by two years of experiments. The simulation models' accuracy was verified by the Nash-Sutcliffe coefficient. The results showed that the water balance model had a high Nash-Sutcliffe coefficient, indicating good accuracy, but the ADF-50 BP-ANN model had a slightly lower coefficient. According to previous studies, the reason for this is the model's limited training samples [31,32]. However, further use of the small-sample expansion method would make the expanded data tend towards the mean and cause the new data to lack the potential features of the original data, resulting in the generation of wrong samples [33]. Therefore, the accuracy of the ADF-50 BP-ANN model can only be improved by collecting further experimental data. The model's optimization objectives were the maximum yield and the minimum ADF-50, the latter being an important evaluation criterion of Lycium barbarum quality. The non-dominated set of the optimization problem was obtained using the multi-objective genetic algorithm NSGA-III and included lower drip irrigation limits tending to different objectives. The water balance model of the Lycium barbarum active root layer was used to calculate the irrigation quantity under different lower drip irrigation limits. As Table 1 and Figure 9 show, irrigation times and total irrigation quantity were proportional to the lower irrigation limit: the lower irrigation limits of scheme S2 in T1 and T3 were 10% and 5% of field capacity higher than those of S3, respectively, resulting in an average increase of 13 irrigation events and 1054.19 m³/hm² of total irrigation quantity (61.5%) over the two years. Li reached the same conclusion when studying the effect of irrigation limits on the water production efficiency of tomatoes [34], and this is similar to the finding of Hou, studying the water and heat distribution of Lycium barbarum orchard soil, that the frequency of drip irrigation is positively correlated with the lower irrigation limit [35]. Observing the yield and ADF-50 values of the different schemes, both increased with the total irrigation water. Similar conclusions were obtained in other studies on Lycium barbarum in the same area [36]. Compared with S3, S2 increased yield by 44.2% and ADF-50 by 43.9% (the smaller the ADF-50, the better the quality of Lycium barbarum), further verifying the competitive relationship between the yield and ADF-50 objectives. It can be seen that the simulation-optimization model selected schemes tending to the yield objective with higher lower irrigation limits, so as to increase the irrigation quantity and yield; when selecting schemes tending to the quality of Lycium barbarum, schemes with lower irrigation limits were given priority, which reduced the total irrigation quantity, the yield, and the value of ADF-50. When the simulation-optimization model selected a scheme tending to one objective, it also considered the scheme's performance on the other objective. Compared with the original scheme, the scheme tending to the yield objective increased the yield by 32.6% on average during the two years of the simulation experiment but increased ADF-50 by only 4.6%, so the quality of Lycium barbarum was only slightly reduced. The scheme tending to the ADF-50 objective reduced the yield by 8.1% while reducing ADF-50 by 26.8%. The results show that NSGA-III can deal well with multiple competing objectives; Liu and Hou reached the same conclusion when using NSGA-III to solve multi-objective optimization problems [37,38]. S1 was a compromise scheme between the two objectives; its lower irrigation limits were therefore between those of S2 and S3, at 65%, 50%, and 65% of field capacity for T1, T2, and T3. This is similar to the lower irrigation limit scheme for Lycium barbarum that Xu selected by manual comparison (65%, 65%, and 55% of field capacity for the three stages) [39]. The irrigation times, total amount of irrigation water, yield, and ADF-50 of S1 all lie between those of S2 and S3. Compared with the original scheme, the yield of S1 increased by 10.7% and ADF-50 decreased by 8.8%, which indicates that the simulation-optimization model can effectively improve the yield and quality of crops. Numerous scholars have reached a similar conclusion when using simulation-optimization models to optimize the allocation of irrigation water [40][41][42].
In future research, we will continue to collect irrigation quantity and ADF-50 data so as to expand the training samples of the ADF-50 BP-ANN model and improve its accuracy. In addition, we will try to incorporate the ModelFlow model, which can simulate the groundwater flow process [43], into the simulation-optimization model so as to improve the simulation accuracy of soil water content in the Lycium barbarum planting area.
Conclusions
After two years of experimental simulation, the water balance model and the ADF-50 BP-ANN model were shown to accurately simulate the change in soil water content and the mathematical relationship between ADF-50 and irrigation quantity in the Lycium barbarum planting area. The optimization model based on the NSGA-III algorithm can deal well with the competition between the quality and yield of Lycium barbarum and yields lower limit schemes for automatic drip irrigation that tend to different objectives or compromise between the two. The findings of this study will effectively contribute to planning the lower limit of automatic drip irrigation for Lycium barbarum, and they are also useful for other Lycium barbarum planting areas with similar weather conditions.
Figure 1. Study area location and experimental sites within PR China.
Figure 3. Construction flow chart of the coupling model.
Figure 4. Verification of water balance model in Lycium barbarum active root layer.
Figure 6. Scatterplot of relationship between lower irrigation limit and score at different growth stages (LIL means lower irrigation limit).
Figure 9. Irrigation distribution of different LIL solutions.
Table 1. Functions' values for the 3 treatments.
|
v3-fos-license
|
2018-12-12T22:55:49.358Z
|
2012-10-19T00:00:00.000
|
56126925
|
{
"extfieldsofstudy": [
"Economics"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://sajems.org/index.php/sajems/article/download/460/179",
"pdf_hash": "1d66b6902c6cebefdf87259f2e8385b7f4c2f280",
"pdf_src": "Anansi",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46569",
"s2fieldsofstudy": [
"Economics",
"Law",
"Business"
],
"sha1": "f992133c889cbd9d6e346cda06eb7d0cd7aff2a1",
"year": 2012
}
|
pes2o/s2orc
|
Measuring excessive pricing as an abuse of dominance – an assessment of the criteria used in the Harmony Gold / Mittal Steel complaint
The Competition Tribunal recently found Mittal Steel SA guilty of abusing its super-dominant position by charging excessive prices to the detriment of consumers of flat carbon steel products. This article assesses the economic tests to be used for excessive pricing in light of the case and reviews the lessons that can be learned from the evidence required for the different tests. It discusses issues related to using profitability as a test and points out problems and pitfalls in profitability measures.
Introduction
Section 8(a) of the South African Competition Act (Act 89 of 1998 as amended) prohibits a dominant firm from charging an excessive price to the detriment of consumers. Under Chapter 1, such a price is defined as one that bears no reasonable relation to the economic value of a good or service and one that is higher than this value. The Tribunal heard its first excessive pricing complaint in 2006, which was brought against Mittal Steel SA by the mining companies Harmony Gold Mining and Durban Roodepoort Deep 2.
This article looks at the broad economic issues surrounding excessive pricing as an abuse of dominance in an industrial organisation framework. It seeks to draw key insights from the approaches taken by the parties and the Tribunal in the case against Mittal Steel SA to arrive at the appropriate and relevant tests to assess excessive pricing. It looks at the rationale behind Mittal's pricing behaviour, how it implemented the pricing system, and why this is indicative of the unilateral exertion of market power. Some of the tools available to assess excessive prices prescribed by international case law are evaluated, and examples of how these were applied in the analysis of Mittal's domestic prices for flat steel products, including the difficulties and pitfalls of using these tools, are considered.
The contrasting theoretical approaches to excessive pricing
To assess excessive pricing as a contravention of the Competition Act and to identify the appropriate tests that can be used to show this, it is helpful to understand the different theoretical positions in the debate.
The structure-conduct-performance paradigm (SCP) or the Harvard ('structuralist') approach in industrial organisation posits that the performance of an industry is a function of the conduct of its market players, which in turn is a function of the industry's structure (Martin, 1993: 3). Under this paradigm, firms in highly concentrated industries facing limited or no competition can charge prices that are well above those in less concentrated markets. In other words, structure allows for conduct and determines outcomes such as pricing levels.
By comparison, the efficiency paradigm, or the Chicago School approach, maintains that the most efficient and low-cost firms gain market share and earn higher profits due to this superiority (Demsetz, 1973a, 1973b and 1974; see Leach, 1992: 143). As a consequence, concentration increases as these firms gain market share due to their competitive edge over others. This is the exact reverse of the structuralists' causality: performance and competitive conduct lead to a structure that may be concentrated. Prices which may appear high relative to costs and are set by a dominant firm are, in this interpretation, the reward to a low-cost, efficient firm. While this perspective suggests competition authorities should not generally be concerned with pricing that may appear to be excessive, it ultimately still rests on the nature of the market and firm conduct in question.
While the rift between the two schools of thought amongst academics is obvious in South Africa (as discussed in Reekie, 1999: 269), the framework used by the South African competition authorities in the past has broadly been the SCP framework (Theron, 2001: 620).
An important concern is the ease of entry. If we emphasise new entry as the source of competitive discipline, then high prices are a means of rewarding firms for risk-taking and innovation, and provide an incentive to develop superior products and services. It is therefore accepted that under normal competitive conditions, with low barriers to entry and contestable markets, "excessive" prices pose little concern as they are the stimulus for new entry to occur. However, under high barriers to entry and conditions of imperfect competition, there may not be sufficient rivalry to undermine supra-competitive prices over time.
Indeed, certain international authorities recognise that excessive prices are a concern in circumstances where entry barriers are high, resulting in diminished effective rivalry (Monti, 2006: 7). The EC competition authorities believe that there are significant and long-term market failures that prevent the market from working effectively and that imperfect competition and high barriers to entry are widespread. This is especially so in the case of monopolies that attained entrenched dominant positions through current or past state support or legal rights and which operate in incontestable markets with little or no effective rivalry. Under such conditions, it is possible that firms charge prices that are above what is considered a product's economic value. The EC directly prohibits imposing "unfair purchase or selling prices" on customers and asserts that there is some fair price that serves to redistribute wealth and power (Evans & Padilla, 2005: 98). In the EC, and in South Africa, every dominant firm has a special obligation not to set excessive prices, regardless of how it attained its dominance. A firm is considered dominant in South African law if it can control prices, or exclude competition, or behave to an appreciable extent independently of its competitors, customers or suppliers. However, US competition law does not oppose monopoly pricing per se and takes the Chicago view that markets work best untampered by regulators and that markets are generally highly contestable, with high prices encouraging new entry (Gal, 2004: 345, 346).
Under the South African Act, the critical question is framed in terms of the price relative to the "economic value" of the good or service, with economic value left to the authorities to determine. Economic theory would suggest that a perfectly competitive price in a static framework is one that approximates marginal cost. In such instances, market outcomes are efficient and welfare to society is maximised, thus the perfectly competitive price could be taken as representing "economic value". Under this framework, any deviation from this optimal situation is welfare-reducing. However, in reality the conditions for perfect competition rarely hold, making it difficult to assess pricing in terms of deviations from this benchmark. Arriving at the appropriate competitive benchmark is especially difficult in dynamic industries which are highly innovative and make large investments, including in Research and Development (R&D) (Evans & Padilla, 2005: 101).
This implies that appropriate tests for excessive pricing, and the likelihood of errors of over-enforcement (type 1 errors) against under-enforcement (type 2 errors), have to be evaluated in the context of the industries in question. For example, simple price-cost margin tests could lead to over-enforcement where they have a chilling effect on investment incentives, as potential investors fear prosecution if their investments serve to greatly reduce the cost of production. This type of error is more likely in dynamic industries where firms compete for the sale of new or improved products and services, and where entry barriers are low. The cost to society is the loss to consumers as the introduction of a valuable good or service is discouraged. Type 2 errors result in supra-competitive prices persisting, which lead to a loss of consumer welfare either by consumers paying more than the competitive price or by being excluded altogether.
Motta and de Streel (2006: 91) explain that excessive prices may be an exploitative abuse of market power or an exclusionary abuse aimed at strengthening or maintaining the market power of the dominant firm. Similar to Evans and Padilla, Motta and de Streel caution against anti-trust intervention in cases of excessive prices except under very specific conditions of high and non-transitory barriers to entry leading to a super-dominant position, and when this super-dominant position is due to current or past exclusive rights or un-condemned past anticompetitive practices.
After the industry conditions have been taken into account, several different methods can be used to assess excessive pricing. These methods include comparing prices to costs and various other benchmarks, such as prices in more competitive markets, or assessing the profitability of the product in question. Further, the different methods can be useful in directing competition authorities to find the most efficient remedy. The problems and pitfalls in the different approaches are discussed below and illustrated using the case study.
In the case brought by Harmony and DRD of excessive pricing by Mittal Steel SA, the focus was on whether the conduct was consistent with the unilateral exertion of durable market power as compared with what would be expected under effective competition (rather than perfect competition). Under effective competitive rivalry, prices would be expected to reflect factors including production costs and, hence, economic value.
Given that Mittal faces no direct effective rivals locally, the complainants scrutinised the pricing system that generates the prices charged to most local customers and compared this against pricing in the few markets in which Mittal faces some (although limited) competitive discipline. As in international excessive pricing cases, the complainants utilised a range of benchmarks, as discussed below, to show what prices would tend to under some level of effective competition.
The defendant's main approach to the relevant test for excessive pricing was a profitability analysis. The Tribunal rejected both profitability and a cost-based approach to measuring economic value, and instead emphasised the structural conditions coupled with ancillary conduct that prevents the effective functioning of the market 3.
This meant that the various sides laid differing emphasis on structure, conduct and performance, as we now discuss in more detail.
The different tests for assessing excessive pricing – a case study of the South African flat steel market
In drawing on the case study, I start with structural features, especially barriers to entry and barriers to importing. The conduct is then examined in some detail before assessing the merits of profitability as a measure of performance.
Barriers to entry: Supply of flat steel to the South African market
There is consensus that barriers to entry are an important starting point: if entry is relatively easy, then market power cannot be sustained. In this case, there are significant non-transitory barriers to entry. In addition, the incumbent dominant firm in the South African flat steel industry, Mittal Steel SA (formerly Iscor), does not owe its position to superior efficiency, cost-savings or innovation but to previous state ownership and continued state support post-privatisation.
• Extensive state support
The basis for a company's dominant position, and the relevance of state support and protection in this regard, is a very important consideration if the excessive pricing provision is not to risk wrongly penalising firms who have merely been successful in competing on their merits (Fingleton, 2006: 65). Mittal was a state-owned and operated entity. Even after it was privatised in 1989, Mittal continued to receive significant state support throughout the nineties. It received incentives through the General Export Incentive Scheme and through accelerated depreciation tax allowances. It also received support under the Regional Industrial Development Scheme (which became the Small and Medium Manufacturing Development Programme) as well as a tax write-off under the Strategic Investment Programme. The Industrial Development Corporation further provided financial assistance to Mittal. Such extensive state support post-privatisation, although no longer in the form of a legal barrier, allowed Mittal, and not other steel-manufacturing firms, to become the dominant player in the SA steel industry.
• Economies of scale
Given that steel production involves large economies of scale, it is reasonable to expect a concentrated market, as minimum efficient scale is reached with only a few firms in the market. This is the case in the SA steel industry. Mittal is the dominant player in the flat steel market (of which hot rolled coil is a main product), producing around 80 per cent of flat steel in the local market (the flat steel industry's local sales in 2007 amounted to 2.8 million tonnes) 5. Highveld Steel, as the only other local producer, accounts for the remainder, aside from small volumes of imports, and it also focuses on a particular product range.
• High transport costs
Imports are limited, given the high cost of transporting steel over large distances, and Mittal finds itself in a naturally protected market. Large distances from international markets and rising shipping costs add a significant margin of transport costs onto a relatively low-value, heavy product. Shipping and all associated transport costs could constitute more than 40 per cent of the cost of product imported into South Africa 6. Prior to April 2006, there was also a 5 per cent duty on imported steel, which further raised the import cost. This duty has since been abolished. High transport and associated costs nonetheless significantly limit the competitiveness of steel imports.
• Input cost advantage
SA has numerous advantages in the production of steel and faces low input costs. The basic inputs needed are iron ore, electricity, coking coal, natural gas and labour. SA has abundant and good-quality iron-ore deposits, relatively cheap labour and cheap electricity, while only coking coal is imported. Mittal achieves significant absolute cost advantages for iron ore from being backwardly integrated with its source, Kumba. A long-term preferential agreement was struck at the time of Kumba's unbundling, allowing Mittal access to very cheap iron ore at cost plus 3 per cent 7.
Mittal also enjoys cost benefits in its electricity input, which is required to run the electric arc furnaces. It secured a 25-year deal with Eskom for its electricity costs, with modest price increases. Research by the Commodity Research Unit (CRU), an international body that analyses steel markets globally, showed that in 2002 Mittal's electricity costs were $0.013/kWh compared to a world average of $0.045/kWh. The cost of natural gas was $1.7/GJ compared to a world average of $4.7/GJ in the same year 8. Indeed, Mittal's Vanderbijlpark operating costs were $250/tonne (which ranked 17th out of 158 plants worldwide in terms of ascending costs), while the total world weighted average was $325/tonne 9.
Mittal is backwardly integrated in the supply of its key inputs. These and other agreements help to maintain its market position and increase the barriers to entry for new entrants, who are not able to secure such favourable terms. In its presentation of annual results for the 12 months ended December 2004, Mittal highlights how its Vanderbijlpark and Saldanha operations are amongst the lowest operating cost producers of Hot-Rolled Coil (HRC) globally 10.
Assessing conduct in the determination of excessive pricing: Mittal Steel SA's pricing of flat steel
Conduct is at the heart of understanding a firm's pricing and its wider impact; merely being a dominant entity is clearly not sufficient to conclude that a firm's pricing is excessive. Its conduct in how it sets prices should be analysed to see whether this takes into account any rivalry or whether the prices are unilaterally imposed on customers. The outcome of this conduct should be assessed to see whether it has an anticompetitive effect.

Import parity pricing (IPP) was, until recently, the pricing mechanism employed by Mittal in its pricing of flat steel products. A range of costs are added to the free on board (Black Sea) price to arrive at an IPP for a given product specification for a given month 11. These costs include shipment, transport and administration costs, as well as a "hassle" factor (a non-price component that captures the hassle of shipping delays, delivery lags, etc.). This is compared to the list price given to Mittal's customers. After volume and settlement discounts are taken into account, the difference between the calculated hypothetical IPP and the local list price is credited/debited to/from the customer through an import parity discount/surcharge. It must be noted that this import price is a notional or hypothetical price and that the imports on which such prices are based do not physically come into South Africa. Below is a breakdown of how the IPP was calculated:

The point is not, however, that import parity pricing is excessive, but that in the circumstances of this market and industry in South Africa it reflects the sustained unilateral exertion of market power. These circumstances are, first, that Mittal has several cost advantages in the production of flat steel locally, as described earlier, and a pricing practice like IPP is not in any way cost-related. Second, and related, is that such cost advantages allow Mittal (and South Africa) to be a large net-exporter of basic flat steel products. In 2007, the domestic market for flat steel products accounted for 2.8 million tonnes, while 1.4 million tonnes of flat steel produced locally were sold in the export market, of which most output can be attributed to Mittal 12. In a net-exporting country of flat steel products, the competitive price should tend towards the export price of the product. The opportunity cost of not supplying product into the local market is to sell those units into the export market at the achievable export price. The export market is more competitive than the local market, given the numerous international players.
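As a purely illustrative sketch of the IPP mechanism described above, the build-up below assembles a notional import parity price from a free on board price plus the listed cost components. Every figure is hypothetical; none is taken from the case record, and the component names are assumptions used only to make the mechanics visible.

```python
# Hypothetical import-parity build-up (all figures illustrative, USD/tonne).
fob_black_sea    = 400.0   # free on board (Black Sea) reference price
freight          = 60.0    # shipping to a South African port
insurance_admin  = 10.0    # insurance and administration
port_handling    = 15.0    # port and handling charges
inland_transport = 25.0    # rail/road delivery to the customer
hassle_factor    = 20.0    # non-price component: delays, delivery lags, etc.
import_duty      = 0.05 * fob_black_sea   # 5% duty (abolished in April 2006)

ipp = (fob_black_sea + freight + insurance_admin + port_handling
       + inland_transport + hassle_factor + import_duty)

list_price = 520.0              # hypothetical local list price after discounts
adjustment = ipp - list_price   # credited (if positive) or debited to the customer
print(f"notional IPP: {ipp:.2f}, adjustment vs list price: {adjustment:.2f}")
```

The point of the sketch is only that none of these components is tied to Mittal's own production costs, which is what the complainants emphasised.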
The attractiveness of lower-than-IPP prices to the producer is well illustrated by price-cost comparisons and the differential prices charged by Mittal to identified groups of customers.
Price-cost comparison
An excessive price can be viewed as one that covers, by a large margin, the dominant firm's costs plus a reasonable rate of return. Theoretically, an efficient firm in a competitive environment, under static conditions with little innovation, prices at marginal cost. In reality, however, a firm's marginal costs are difficult to calculate. Instead, average variable costs are typically used in competition matters (see for instance Office of Fair Trading (OFT), 2003), although these are also difficult to determine.
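To make the mechanics of such a test concrete, a simple markup calculation over average variable cost looks like the following; the figures and the assumed "reasonable" margin are invented for illustration and do not come from the case.

```python
# Hypothetical price-cost margin test (USD/tonne; figures are illustrative).
price = 550.0
average_variable_cost = 300.0
reasonable_return = 0.15          # assumed "reasonable" margin over cost

markup = (price - average_variable_cost) / average_variable_cost
benchmark_price = average_variable_cost * (1 + reasonable_return)

print(f"markup over AVC: {markup:.0%}")                          # 83%
print(f"excess over benchmark: {price - benchmark_price:.2f}")   # 205.00
```

As the text goes on to argue, such a calculation is only a starting point: the choice of cost base, the allocation of common costs and the appropriate margin are all contestable.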
Another complicating factor in undertaking price-cost comparisons is the allocation of common costs in multi-product firms. These are costs that arise from two or more products being produced together 13. While these costs vary with output to a certain extent, it is difficult to allocate them to a particular product (Whish, 2003: 689). Often the competition authorities are concerned only with one particular product of the firm (one product is thought to be priced excessively).
Generally, using any figure for the dominant firm's costs might be misleading. The firm's costs may be high because of inefficiency, and hence a price over an inflated cost is not a true representation of the extent of excessive pricing. The firm in question may be inefficient due to managerial slack (X-inefficiency) or due to complacency from the lack of effective competition as a disciplining tool.
It is just as difficult to determine what the standard reasonable rate of return or margin over cost should be. In industries that engage in high levels of R&D (e.g., pharmaceutical companies that develop new drugs) and those that have considerable intellectual property (e.g., software developers), there is little contention that such risk and innovation should be rewarded. Persistently high prices over costs for a prolonged period of time may sound alarm bells, but the effect may not necessarily be anti-competitive. It is when such high prices prevail over time and no new entry is witnessed that the situation may become a cause for concern (OFT, 2003: 6, 23, 34).
Purely comparing prices to a firm's marginal costs or other identified costs does not unambiguously provide an answer to what a competitive benchmark should be. In a recent decision involving the excessive pricing of pre-race horseracing data 14, the UK Appeal Court ruled that it was not correct to always equate the "economic value" of the product to the cost of producing the product plus a reasonable profit (that is, a price-cost analysis). The economic value of a product may be well above the cost of producing it if it encapsulates externalities or benefits (similar to the case of innovation and investment). In this case the Appeal Court felt that determining the competitive price by simply equating it to any justifiable allocation of the cost of production and a reasonable rate of return failed to take into account the economic value of the pre-race horseracing data to the complainant and what it could make out of the data as a source of income.
Prices can also be compared across countries that have similar cost structures. In this case, when the prices of plants in different low-cost countries such as Taiwan and South Korea were compared with Mittal's prices (also a low-cost producer), Mittal's margins were seen to be substantially higher 15. The complainants noted the difficulty of undertaking robust international price-cost comparisons in that each country faces unique circumstances and domestic prices are often collected under very different conditions and assumptions. Drawing conclusions solely on such comparisons is therefore likely to be problematic.
Given the difficulty in undertaking accurate direct price-cost comparisons and the potential ambiguity of the results, price-cost assessments need to be viewed in a wider context of firm conduct.
Comparators that show what prices would tend to under effective competition
The role of comparators in assessing excessive pricing is to identify the level to which prices would tend under effectively competitive conditions. This then offers a benchmark against which the existing price can be compared 16.
In this case the comparators themselves were indicators of conduct, with the prices being the result of specific arrangements consistent with the exertion of market power to differing degrees.
a. Secondary export rebates
Secondary export rebates are awarded to firms that buy steel from Mittal, add value to it and then export the value-added product. The rebate is only given after strict conditions are adhered to, showing that Mittal's product was used as an input material, that at least twenty per cent value was added to the steel, and that the value-added product was indeed exported. Between 2001 and 2004, a significant amount was given in the form of secondary export rebates.
Mittal itself explains that the price faced by local firms after they receive the secondary export rebates is close to the export price. As mentioned earlier, in a net-exporting country the competitive price would tend towards the export price, given international competition for the product. The price received by these exporting firms may be a more competitive price than those paid by the general population and hence is a useful comparator (although the limitations of using the export price as a comparator are explained later). This price is significantly lower than the IPP, as was clearly revealed in the pipe and tube industry, one that benefits greatly from this class of rebate. Heavy earth-moving and construction machinery manufacturers and manufacturers of conveyor belts also receive such rebated steel prices for the products that they export.
b. Rebates for import-competing products
In order not to lose market share to imported downstream products, whether raw material or final finished product imports, Mittal offers rebates to firms that face this type of competition. Heavy equipment manufacturers have received this rebate as an apparent response to the threat of potential imports from Australia, Germany and Sweden 17. Conveyor manufacturers also receive rebates to enable them to compete against imports and alternative construction materials. Substantial rebates were granted to flange 18 manufacturers as protection from cheaper Chinese imports.
c. Special industry deals
Another rebate class offered to certain customers is the special industry deal rebate. These are given to customers that have substantial buyer power and who use this power to negotiate with Mittal. They include the favourable rebates given to the packaging industry and the automotive industry.
One large firm in the packaging industry receives a special deal that is calculated using a cost-plus formula for the tin plate that it procures from Mittal 19. This formula sets the price level of the tinplated steel sold to the company, and the price increases by taking into account a weighted basket of the world price of tin, the prices of the input materials (gas at contract prices, iron ore and local coking coal price changes proxied by the producer price index (PPI), and imported coking coal at the actual price), as well as salaries, electricity and other costs 20. This price, in contrast to the IPP, is more representative of the costs of manufacturing tinplated steel. As such, it provides an idea of what a reasonably competitive price benchmark should be, and this price is considerably lower than the IPP facing most customers.
The automotive industry is another powerful buyer that is able to negotiate favourable deals with Mittal. The price that firms in this industry receive is based on an ex-works price in the EU, adjusted for changes in the local PPI. The industry association insisted that in order to achieve international price competitiveness in the automotive industry, Mittal would have to adopt another pricing mechanism and not use the existing IPP model. Industry representatives suggested that an ex-works basis was internationally more competitive given that it is a cost-driven figure.21 Mittal recognises that if prices in this industry were too high, manufacturers of cars would relocate their production capacity entirely or resort to importing components or full vehicles rather than using local steel. The price received by the automotive industry is therefore not based on the IPP mechanism but is one in which low underlying cost pressures reflected in the PPI are passed on to the benefit of steel buyers. Industries that did not qualify for this special deal paid substantially higher prices than the automotive industry between 2004 and 2005 for the same product.
d. Rebates in the face of threat of substitute products
Another comparator used by the complainants was the price given to firms to enable them to compete with substitute products such as aluminium, plastic, cement and timber. Prices to customers that could potentially switch to these products showed the level to which prices tended when there was some form of competition from alternative products: for instance, aluminium cans, plastic or glass bottles instead of steel cans in the food and beverage industry, cement or clay instead of steel roof tiles, and timber trusses instead of steel trusses. Mittal accepts that such products affect its market and that it takes into account the prices of these potential substitute products available for use in certain industries when pricing its steel.
e. Mittal's export price of flat steel products
Mittal directly exports its products through an exclusive agreement via a joint venture with Macsteel International BV, a listed company in the Netherlands. Part of this agreement stipulates that the exported product cannot be re-imported and sold into the domestic market, in effect meaning that exports can be a means of reducing supply to the local market. Export prices of products have been substantially lower than local prices, sometimes by up to sixty per cent.22 As explained above, prices in an open economy under effective competition should tend towards the export price. If effective local competition drove the local price lower than the export price, producers would choose to export the product rather than to sell it to the local market. This export price would then be a good benchmark on which the economic value of a product could be determined in a net-exporting country.
However, for export prices to be a useful benchmark for competition, these prices have to adequately cover the firm's costs of production and yield sustainable returns in the long run. This case raised two important concerns in this regard. First, in cyclical industries such as the global steel industry, achievable prices for exports may well cover costs and offer sustainable returns at the peak of the cycle but not necessarily over the whole cycle. Therefore, it is important in such industries to understand the dynamics of the global market and to compare export prices over a longer period of time that adequately covers a given cycle.
While Mittal argued that the steel prices in 2005 and 2006 represented a peak from which they would soon fall, evidence on Mittal's decisions to expand production suggested that it expected global prices to remain attractive in the near future. Mittal exports around forty per cent of its product. It planned to increase its capacity to produce even more steel, much if not all of which would be exported given that supply already outstrips local demand. Such expansion plans are therefore based predominantly on the export prices that will be realised by selling the output of the additional new capacity to the export market. Such extensive expansion plans would not rationally be made if the achievable export prices did not cover costs and did not yield sustainable returns. Moreover, as was revealed in the course of the hearings, even on a pessimistic outlook, export prices were expected to yield positive returns on investment. Export prices for flat steel product from Vanderbijlpark, according to figures put up by Mittal's own experts and adjusted to take into account differences in product mix and other considerations (such as higher export prices in certain countries due to anti-dumping restrictions), show that these prices more than adequately cover costs, and do so by a large margin.
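The cycle-average point above can be made concrete with a trivial numerical sketch: what matters is that the export margin is positive on average over the whole cycle, not merely at its peak. The price series and unit cost below are invented solely for illustration.

    # Hypothetical export price series over a stylised steel cycle (US$/tonne)
    export_prices = [380, 420, 520, 640, 610, 470, 400, 430]
    unit_cost = 410  # assumed constant unit cost, US$/tonne

    peak_margin = max(export_prices) - unit_cost
    cycle_avg_margin = sum(export_prices) / len(export_prices) - unit_cost

    print(f"peak-of-cycle margin: {peak_margin:.0f} $/t")       # 230 $/t
    print(f"cycle-average margin: {cycle_avg_margin:.0f} $/t")  # about 74 $/t
    # A positive cycle-average margin, not just a positive peak margin, is what makes
    # the export price a sustainable benchmark for competitive pricing.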
While the export price can be a useful benchmark in a net-exporting country, the actual price charged in the local market should also take into account any costs for relevant quality and dimension extras. In certain industries, such as the steel industry, exported product is often more basic and with less value addition than product sold locally.
An additional important consideration when identifying benchmarks for competition, or when undertaking a price-cost analysis, is whether the firm in question (or a competing firm whose pricing is identified as an appropriate benchmark) is an efficient firm. Costs of inefficient firms are likely to be inflated, masking the true margins. Although Mittal enjoys low production costs, this does not automatically mean that it is an efficient firm. Its low production costs are due to the favourable input material costs that Mittal receives and not particularly to high levels of productivity, innovation and efficiency that serve to reduce operating costs. Therefore, although Mittal may be in the lowest-cost quartile of international steel plants, this reflects its fortunate position in a country blessed with abundant and low-cost material inputs into which Mittal is either backwardly integrated, or for which it has received preferential rates through long-term supply agreements.
Further, Mittal's annual reports and presentations reveal the efficiencies gained between 2001 and 2005. These include savings made under the Business Assistance Agreement (BAA) and amount cumulatively to around R1 billion per year. Mittal itself recognises that it was hugely inefficient in the past as it tried to cater for all its customers' unique needs and produced around 500 different grades of steel at Vanderbijlpark. This was later reduced to 120 grades following a rationalisation programme which resulted in major cost savings.23 Large-scale retrenchments (around 10 000 workers) were also undertaken as a drive to improve efficiency.24 While it does not necessarily follow that embarking on these efficiencies now means that Mittal was inefficient in the past, what is notable is the magnitude of the cost savings achieved in a short space of time, as well as the fact that none of these savings stem from new state-of-the-art technological advances. In other words, such efficiencies and cost savings could have been achieved years ago.
Even though innovation can be seen broadly to include new ways of marketing a product, new methods of production, new sources of supply of raw materials, or process innovations, as proposed by Schumpeter (1934), improved management as part of restructuring a former state-owned industry is not innovation. In this regard, it appears as if the cost savings made at Mittal Steel SA, planned and already implemented, were basic improvements that could have been achieved by Iscor even before Mittal acquired it.25 This suggests that these improvements in efficiency may have been a result of incentivised management under private ownership and not a result of any process innovations.
Summary
The above comparators reveal Mittal's conduct in terms of unilateral price-setting and market segmentation consistent with the unilateral exertion of market power. They also illustrate that, where greater competitive discipline exists, whether directly or indirectly, Mittal responds in its pricing while preventing its overall pricing on the majority of its product from being undermined. The majority of customers paid prices set at import parity levels. The comparators thus also allow for assessments of anti-competitive effect in that they provide situations where the counterfactual of prices under effective competitive rivalry can be considered.
It is important to note that such discriminatory pricing practices can only be sustained if arbitrage between the different price groups can be explicitly prevented. Mittal has in place checks and balances that monitor very strictly that arbitrage between the different customer groups or designated uses of the product does not take place.
It may be argued that the different prices may just reflect consumers' willingness to pay. However, this merely goes to the essence of the monopoly pricing decision, taking into account the nature of demand and, in this case, imperfect alternatives in the form of imported steel. Higher prices in South Africa may also be argued just to reflect higher production costs than in other countries, but we have shown that this was not the case here. In addition, there is no guarantee that comparing the dominant firm's prices with a competitor's prices will provide an objective idea of what a competitive price should be; the competitor's price itself may be excessive or predatory (Evans & Padilla, 2005: 109). Lastly, for a comparator to be a benchmark for competitive prices, it has to represent an outcome that is sustainable. This relates to the issue of the efficiency of the firm in question, which I address in more detail in the following section.
Is there a role for a profitability analysis in an excessive pricing case?
Evaluating whether prices are excessive could be equated with evaluating whether profits are excessive. For example, according to the UK's OFT, "the ability of an undertaking…to earn excessive profits may provide evidence that it possesses some degree of market power" (OFT, 1999: 7). The level of profitability could therefore be another test used to assess the extent of excessive pricing, and indeed an important part of Mittal's defence rested on such a profitability analysis. This section looks at the use of a profitability analysis and argues that there are substantial pitfalls in using it as a measure of excessive pricing. At the outset it is important to remember that our task is to evaluate conduct, while profitability is an accounting measure of performance. Using accounting measures is in general problematic given that there are a number of interpretations of which measures to use. Indeed, as Lind and Walker (2003: 1) observe, "measuring profitability in an economically meaningful way is virtually impossible to do for any complex business." While such measures include the internal rate of return (IRR) and net present value (NPV), these generally rely on data reported by a firm for accounting purposes, raising various problems including the valuation of assets. A firm may value assets based on the historical cost of capital. But for firms that purchased their assets a long time ago and operated in a high-inflation environment, the reported value of their assets may be very low, making their profitability (profits divided by assets employed) appear excessively high. Conversely, a firm may argue that it should price so as to cover the replacement cost of its assets, yet the investments may have been made by the state and in any event would never be repeated in this form.26
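A simple numerical sketch shows how sensitive measured profitability is to the choice of asset base. The profit figure and the three asset valuations below are hypothetical and are not Mittal's actual numbers; they merely show how the same profit stream can look excessive, modest or middling depending on the denominator.

    # Hypothetical figures, in R billion, to illustrate the asset-valuation problem.
    operating_profit = 2.0
    asset_bases = {
        "historical cost (old, depreciated plant)": 5.0,
        "full replacement cost of the plant": 25.0,
        "sustaining-capex view of the asset base": 8.0,
    }

    for label, assets in asset_bases.items():
        print(f"return on assets, {label}: {100 * operating_profit / assets:.0f}%")
    # -> 40%, 8% and 25% respectively: the same profits, three very different
    #    conclusions about whether they are "excessive".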
In industries such as fixed-line telecommunications with large sunk costs, regulators engage in detailed exercises to assess pricing in relation to long-run incremental costs to reflect the costs of expanding capacity. In the case of the steel industry, experts who compare costs of steel plants globally calculate a "sustaining capital expense", which is the actual average annual capital expenditure needed to maintain the plant at a constant condition in the long run. For Mittal this was much lower than the replacement cost of the whole plant.27
For profitability to be indicative of anticompetitive conduct, one must also assume that the firm is otherwise being run efficiently and that the profits are not simply returns to innovative activities, as discussed above. A firm that is run inefficiently is likely to have inflated costs and therefore its profit margins would be squeezed. Principal-agent theory suggests this is possible where, in a monopoly situation, weak shareholder monitoring of management allows management to engage in satisficing behaviour. Conversely, vigorous competitive rivalry stimulates management effort and greater productive efficiency. Other problems with this method of analysis include accounting for the cyclical nature of some industries, such as the steel industry.
Given the numerous problems related to the use of appropriate accounting measures, the use of profitability as the main test in assessing excessive pricing is generally discouraged. It involves competition authorities engaging in very detailed evaluations of accounting data, something they are not generally well-equipped to do. However, profitability can provide useful insights as part of the overall picture if due attention is paid to the various factors we have identified.
Some conclusions
Excessive pricing is, by definition, about understanding a particular conduct. The structure that allows the conduct is clearly a crucial part of the assessment and may indeed be illuminated by the conduct itself. In the case discussed here, there are high and non-transitory barriers to entry that collectively serve to limit competition in the South African flat steel industry, and this was reinforced by the various ways in which Mittal engaged in unilateral price-setting of its products. The structural conditions thus facilitate the abuse as they provide for durable or sustained market power, but they are not sufficient to conclude that a dominant firm has engaged in excessive pricing. It is very important to assess conduct that represents a unilateral abuse of market power in terms of the nature of the industry and markets in question, as well as how the firm came to be in such a position of dominance.
Dominant firms are prevalent in many other industries in South Africa, given its small size, legacy of apartheid and previous strong state ownership and support. There are significant barriers that limit the contestability of markets and result in super-dominant firms being able to mark up prices substantially above a level that is deemed competitive and reflective of economic value.
The Tribunal, in ruling that Mittal had abused its dominance, strongly emphasised the incontestable and uncontested nature of the market, with conduct not being subject to the constraining presence of a regulator or of a potential entrant. Indeed, according to the Tribunal, section 8(a) uniquely applies to such industries where firms can be characterised as "super-dominant". As stated in the ruling, "it is conduct that abuses a structural advantage - dominance or, in Section 8(a)'s case, 'super-dominance' - that is prohibited".28
This broadly follows the suggested structural conditions. Given this interpretation, the Tribunal found that Mittal was indeed a super-dominant firm. It operated in an incontestable market with entry barriers that were established by historical circumstances as well as technological and commercial considerations which have had as great an impact as barriers constituted by law or license.29 The Tribunal further found that the essential ancillary conduct that flowed out of this super-dominant structure was to withhold local supply,30 facilitated by the exclusive export arrangement with Macsteel that allowed the segmenting of the domestic and export markets. While segmentation within the local market through the various rebates discussed in the complainants' comparators approach was also considered by the Tribunal, it was made clear that the purpose of such consideration was strictly not to arrive at a level of price that would be lawful or non-excessive compared to an identified unlawful or excessive price.31 This would be more the role of a price regulator and not one which a competition authority would seek to undertake.
Therefore the Tribunal's test of excessive pricing is summarised as follows: "[W]here the price appears to have no explanation other than the pure exercise of monopoly power [as evidenced by the structure of the market and any relevant ancillary conduct on the part of the dominant firm], then the price is not reasonable in relation to economic value".32 Since the Tribunal's interpretation would involve an examination of the underlying market considerations that lead to the price level (rather than the price level itself), the risk of penalising firms that charge high prices due to extensive innovation and differentiation is reduced.33 For example, Motta and de Streel caution that "the Commission should only intervene in cases of very strong dominance (confined to a monopoly or near monopoly) that are caused by past or current legal barriers" (Motta and de Streel, 2006: 124), a view reinforced by Evans and Padilla (2005: 100). In this case it was clear that the flat steel product market was not such a dynamic, innovation-driven market.

For reference, the local IPP price determination is as follows:
Import price (US$ FOB) + shipping cost = C&F Durban
+ 5% duty (since abolished)
+ offloading and administration (+1%)
+ premium ("hassle factor"): 5%
= import price at the coast (US$)
US$ import price x exchange rate = import price at the coast (Rand)
+ transport costs to bring the product to Vanderbijlpark = minimum import price to the customer
Difference between the minimum import price and the local list price = recommended IPP adjustment (import parity price adjustment) = local price after IPP.
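For illustration, the sketch below works through this IPP build-up for a hypothetical tonne of flat steel. The percentage add-ons follow the description above; the FOB price, freight, exchange rate and inland transport figures are invented, and for simplicity the duty, offloading and premium mark-ups are assumed to be applied additively to the C&F value.

    # Hypothetical worked example of the IPP build-up described above.
    fob_price_usd = 500.0         # US$/tonne, free on board
    shipping_usd = 40.0           # sea freight to Durban -> C&F value
    duty_rate = 0.05              # 5% import duty (since abolished)
    offloading_admin_rate = 0.01  # roughly 1% offloading and administration
    premium_rate = 0.05           # 5% "hassle factor" premium
    zar_per_usd = 7.0             # assumed exchange rate
    inland_transport_zar = 250.0  # Rand/tonne, Durban to Vanderbijlpark

    cf_durban = fob_price_usd + shipping_usd
    import_price_coast_usd = cf_durban * (1 + duty_rate + offloading_admin_rate + premium_rate)
    import_price_coast_zar = import_price_coast_usd * zar_per_usd
    min_import_price_zar = import_price_coast_zar + inland_transport_zar

    print(f"minimum import price to the customer: R{min_import_price_zar:.0f}/tonne")
    # The recommended IPP adjustment is then the gap between this minimum import
    # price and the local list price.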
|
v3-fos-license
|
2022-01-28T16:41:13.863Z
|
2022-01-25T00:00:00.000
|
246313931
|
{
"extfieldsofstudy": [
"Medicine",
"Biology"
],
"oa_license": "CCBY",
"oa_status": null,
"oa_url": null,
"pdf_hash": "4a1a542fdd2be5a5c2b04f88d3ed04047d8c9249",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46571",
"s2fieldsofstudy": [
"Biology",
"Chemistry",
"Environmental Science"
],
"sha1": "9460404fb6ebf37b9cab14e5316cff205775c04f",
"year": 2022
}
|
pes2o/s2orc
|
Rubisco forms a lattice inside alpha-carboxysomes
Despite the importance of microcompartments in prokaryotic biology and bioengineering, structural heterogeneity has prevented a complete understanding of their architecture, ultrastructure, and spatial organization. Here, we employ cryo-electron tomography to image α-carboxysomes, a pseudo-icosahedral microcompartment responsible for carbon fixation. We have solved a high-resolution subtomogram average of the Rubisco cargo inside the carboxysome, and determined the arrangement of the enzyme. We find that the H. neapolitanus Rubisco polymerizes in vivo, mediated by the small Rubisco subunit. These fibrils can further pack to form a lattice with six-fold pseudo-symmetry. This arrangement preserves freedom of motion and accessibility around the Rubisco active site and the binding sites for two other carboxysome proteins, CsoSCA (a carbonic anhydrase) and the disordered CsoS2, even at Rubisco concentrations exceeding 800 μM. This characterization of Rubisco cargo inside the α-carboxysome provides insight into the balance between order and disorder in microcompartment organization.
Reviewer #1: Remarks to the Author:

I very much enjoyed reading this manuscript, which describes an interesting set of data relating to packing arrangements of Rubisco in Halothiobacillus neapolitanus carboxysomes. It utilizes cryo electron tomography, coupled with existing Rubisco crystal structures, to solve Rubisco arrangements within purified carboxysomes. A take-home result is that high concentrations of Rubisco within carboxysomes lead to increasingly ordered packing of the enzyme, via observable interaction interfaces. While the manuscript provides very interesting insights into Rubisco ordering in these microcompartments, I felt that more could be done to extend the impact of the manuscript and further elucidate the observed interactions between Rubisco holoenzymes. For example, can the interactions be eliminated through mutation of the interacting interfaces? Are the interacting interfaces highly conserved? What would be expected in carboxysomes from different organisms given the conservation (or lack thereof)?

A general question relating to the observed 'classes' of carboxysomes based on their relative packing density (sparse, dense, ordered): could the observed heterogeneity in cbx composition and structure be due, in part, to the growth conditions of Halo and/or the potential lack of growth dependency on cbx rubisco in these organisms (they can grow heterotrophically and have more than one Rubisco)? Would a system or a growth condition which enforces cbx growth dependency provide more definitive results? Also, is there a way of confirming that the observed interactions are not artefactual? It is worth noting that similar tightly packed Rubisco, appearing as a paracrystalline array, has been observed in beta carboxysomes (e.g. Kaneko et al.) and in older EM images of pyrenoids, both of which have since been described as liquid condensates which can become even more freely moving within oxidised beta carboxysomes, suggesting that these previous paracrystalline observations were potentially artefactual. Can the authors provide comment to assure the reader the structures observed here are a genuine reflection of functional alpha-cbx?

In general, I think the work is highly interesting but would be suitable for this journal if additional analyses were carried out, as highlighted in specific comments below. I would be extremely happy to review a version of the manuscript with amendments.
Specific comments:

Line 36: An additional reference for consideration here is the recent review by Hennacy and Jonikas (Annual Review of Plant Biology), further highlighting the importance of transplanting such systems to a non-native host.

Line 40-41: the alpha-cbx of Halo is a 'representative' of cbx's found also in cyano's and other chemoautotrophs?

Line 44: 'could potentially', is the evidence for the Rubisco-S2 interaction inconclusive?

Fig 1. Gene names should be italicized but not capitalized. Alternatively, protein names should be capitalized and not italicized.

Line 56: 'non-equilibrium bicarbonate pool'. This is not clear for those who do not understand CO2 chemistry. Perhaps expand to indicate that there is specifically a disequilibrium between CO2 and bicarbonate, in favour of bicarbonate, within the cytoplasm?

Line 58-61: Models including Kaplan, Badger, Mangan, Long etc. seem to suggest that encapsulation is sufficient. However, all these models are based on unknown diffusional limitations of the shell. Therefore, the primary unknown is the real diffusional resistance of the shell and does not appear to necessitate allosteric regulation of kinetics.

Line 66-67: What evidence is presented to suggest this unique feature leads to the rate of CO2 fixation? Comparison of active site turnover numbers for the different packing classes of carboxysomes would provide this information. Purification of numerous carboxysome preparations from cells grown under a variety of conditions may lead to different proportions of packing types, therefore enabling such analysis. Correlation of packing type and site-dependent Rubisco turnover rates would clarify if packing arrangement led to differential (and possibly allosteric) regulation of Rubisco function.

Angstrom offset which may correlate very well with observed packing in rods, possibly contributing to rod-formation by an iterative extension of shell formation and Rubisco organization. Again, this highlights an opportunity to assess this via mutational analysis to manipulate the Rubisco interacting interfaces. In addition, how conserved are the proposed interacting interface residues? It would be safe to assume that there should be a high degree of conservation in these regions of Rubisco large and small subunits.

Fig 5D. Please identify what is assumed to be CsoSCA and Rubisco in this image, as pointed out in line 195-198.

Lines 122-129: This short paragraph seems to take some time to let the reader know that S2 is possibly the link to Rubisco here. Perhaps it could be written more succinctly and definitively.

Lines 174-180: Is there an observed Rubisco:CsoS1 interface? Would observation of rod-like cbx's offer any clues?

Line 202-203: a large 'enzyme' consistent in size and shape with a 20S proteasome? No evidence is presented for what appears to be a disordered aggregate of protein, I assume in the first image in Ext. Fig 5.

Line 223-225: Note that b-cbx have been observed as paracrystalline in cyanobacteria (Kaneko et al, via TEM), despite current knowledge that the interior of these cbx's should be more motile. This raises the question of whether the observations here and in many circumstances might result as artefacts from the sample preparation process or if they are indeed real. This is where further analysis could be extremely helpful. For example, can specific Rubisco mutations still enable entrapment of the enzyme in carboxysomes but eliminate organised structure at high internal concentrations?

Line 246-247 is a key outcome here.
It implies that an evolved, preferential high-concentration-dependent packing will still maintain Rubisco function under 'saturating' carboxysome filling conditions. This should be emphasised, and it implies a calculable upper limit of Rubisco concentrations within carboxysomes which is driven entirely by the observed arrangement. How does this observed Rubisco packing compare with what has been described as 'Kepler packing' as an estimate of maximum Rubisco quantities in both alpha and beta carboxysomes?

Extended Fig 1. Is it possible to delineate, on these data plots, the apparent 'classes' of cbx (dense, sparse, ordered)?
Reviewer #2: Remarks to the Author:

"Rubisco forms a lattice inside alpha-carboxysomes" Reviewed by Manon Demulder and Benjamin Engel

Metskas et al. use cryo-electron tomography (cryo-ET) to provide the first high-resolution visualization of the alpha-carboxysome, which has remained rather enigmatic compared to the better understood beta-carboxysome. Surprisingly, the authors observe linear fibrils of Rubisco within some of the isolated carboxysomes. They then generate a subtomogram average of this Rubisco at a fairly impressive 4.5 Å resolution to show that fibril formation is apparently mediated by interactions between the Rubisco small subunits. The polymerization of Rubisco into fibrils is a novel and interesting result, which is of high interest to the whole field of carbon concentration/fixation. The transition from Rubisco liquid condensate into ordered fibril is also of broad interest to the field of phase separation, and may even carry potential interest all the way out to fields such as neurodegeneration, which study these liquid-to-solid state transitions. However, there is one major issue with this study, which should either be addressed with additional experiments, or be acknowledged with careful rewording of the text and its interpretations.
Major issue:
A major limitation of this study is that the cryo-ET was performed on isolated carboxysomes. Some features such as the Rubisco fibrils are likely also present within the cell, as I cannot imagine how isolation of carboxysomes would induce fibril formation. However, the isolation procedure and subsequent blotting onto EM grids are unquestionably disruptive--this is clearly shown by the aggregate of broken carboxysome shells in Fig. 5D. Furthermore, I know from my own experience that larger isolated structures (e.g., centrioles) are very easily compressed when they are blotted and frozen into a thin layer of ice on an EM grid. This compression is unavoidable, and is often dramatic. Compression would almost certainly impact the organization of Rubisco within the isolated carboxysomes, especially given the flexibility/disorder of the CsoS2 linker. It could also detach Rubisco in the carboxysome interior from the layer of Rubisco more tightly bound to the carboxysome wall. The authors conclude that "the most consistent feature of the CBs is their compositional and structural heterogeneity" (Line 200), but with the present data, it is not possible to say whether this is a result of carboxysome biology or an artifact of the sample preparation. Specifically, the isolation and blotting of carboxysomes could potentially affect the following analysis presented in this study:

1) The relative abundance of Rubisco fibrils (Figs. 2A, 4A) Line 94: "Roughly one third of the carboxysomes displayed ordered packing"; Line 131: "The ratio of bound and free Rubisco concentrations scales with Rubisco concentration in the CB (Figure 4A)." -> Are these numbers representative of the biology or the sample preparation?
2) The bending angle and twist between Rubisco complexes along a fibril (Fig. 4C) Line 140: "There is a bending angle of 3 ± 2.5 degrees and a 14-degree standard deviation in the twist between stacked Rubiscos." -> Is this fibril bending and twisting present in the cell, or is it an artifact of compression forces caused by the blotting?
3) The higher-order packing of fibrils into a twisted hexagonal lattice (Fig. 3) Line 117: "Rendering all Rubisco fibrils within a CB displays the ordered phase: a loose hexagonal lattice-like ultrastructure of Rubisco fibrils twisting about each other with six-fold pseudo-symmetry (Figure 3B). The fibrils are held at a distance of 12.5 ± 0.7 nm with a tilt of 10 ± 3 degrees (Figure 3D)." -> Again, is this lattice spacing and tilt affected by the blotting forces?
4) The packing and orientations of non-fibril Rubisco in carboxysomes (Extended Figs. 1,3) Line 95: "Other CBs displayed a range of Rubisco concentrations, from sparse to dense (Extended Data Figure 1). A nearest-neighbor Rubisco alignment search across all CBs revealed a concentration dependence: Rubisco orientation is random at low concentration (sparse CBs), but becomes increasingly non-random as concentration rises (dense CBs, Extended Data Figure 3)."; Line 205: "Finally, even in the most ordered alpha-carboxysomes, a substantial portion of the Rubisco does not participate in the lattice (Figure 3B). This heterogeneity is consistent with models of CB assembly, which are based on phase separation and binding affinities rather than a tightly ordered, regular assembly mechanism" -> Is the observed heterogeneity in Rubisco concentration and organization representative of biological variability or artifacts from the isolation and blotting?

5) Rubisco organization and concentration next to the carboxysome shell compared to the carboxysome interior (Fig. 5) Line 174: "The Rubiscos adjacent to the shell occasionally participate in the long fibrils, but not consistently, and appear to behave differently than Rubisco in the CB interior."; Line 176: "the layer is well-populated even in otherwise sparse CBs. We analyzed the angle of the Rubisco C4 axis relative to the shell and found a predominantly random distribution… (Figure 5B)." -> Is this distinction in the organization of shell-adjacent Rubisco also seen within native cells, or have forces from the isolation and blotting detached Rubisco fibrils in the carboxysome interior from the layer of Rubisco more tightly bound to the carboxysome wall?

6) Proposals about carboxysome assembly/maturation mechanisms (Discussion) Line 219: "Sparse carboxysomes lack the concentration necessary to form an ultrastructure, but the dense and ordered packings overlap in Rubisco concentration. All packing types display polymerization of the Rubisco; therefore, the primary distinction between dense and ordered CB packing is whether the fibrils align"; Line 226: "Therefore, the phase transition between dense and ordered packing may involve the CB maturation trajectory." -> Are we really seeing a maturation-driven phase transition, or are some carboxysomes more disrupted than others?

I am surprised and a bit disappointed that the authors did not complement this analysis of isolated carboxysomes with some bona fide in situ data of FIB-milled Halothiobacillus cells. This approach is becoming mainstream for leading cryo-ET labs, and indeed, the last author of this study has access to the necessary FIB/SEM instrumentation, as evidenced by their recent publications:

Regardless of issues associated with the isolation procedure, the Rubisco fibrils appear to be real, and they are a novel finding of high interest to the field. As far as I can tell, the subtomogram averaging performed on these fibrils is methodologically solid, providing some clues into how the fibrils may form via interactions between the small subunits. Thus, I see two possible routes to publication:

1) The authors explicitly describe this work as an in vitro study of isolated carboxysomes. They emphasize the aspects that likely remain true in these isolated conditions, such as the Rubisco-Rubisco structural interactions that establish the fibrils, while the other analyses that may be affected by the isolation (outlined above) should be carefully qualified.
2) The authors add complementary in situ cryo-ET from FIB-milled cells. This does not have to be an extensive dataset, but in situ analysis would be highly valuable to confirm the in vitro observations and add reliable information about the prevalence of different Rubisco organizations within native cells. I think route #2 is definitely preferred and will significantly increase the impact of this study. However, if the authors opt for route #1, I would not object to publication as long as the conclusions are properly qualified.
Minor issues and requested text changes: Line 22, Abstract: "Here, we use cryo electron tomography and subtomogram averaging to determine an in situ structure of Rubisco at 4.5 Å" -> This is certainly NOT an in situ structure. In situ means within the original natural environment. Specifically regarding cryo-ET, in situ means inside the cell, as has been demonstrated in many previous cryo-ET studies of intact small cells or larger cells that have been thinned with a cryomicrotome or by focused ion beam milling. The Rubisco subtomogram average in this study is derived from isolated carboxysomes, which are not in their native state.
Line 29, Abstract: In the abstract, it is claimed that the insights into Rubisco fibril formation "provide future directions for bioengineering of microcompartments." -> In the discussion section, the authors should provide some detail on how Rubisco fibrils open these new bioengineering opportunities. Conversely, if it is too early to predict these possible applications, this claim could (should!) be removed from the abstract.
Line 66, Introduction: "This unique feature of the alpha-carboxysome likely helps to increase the rate of carbon fixation inside the CB." -> How do the authors know that Rubisco polymerization is a unique feature of alpha-carboxysomes? Beta-carboxysomes also contain a pseudo-crystalline lattice, which has not yet been studied at high resolution. Also, the authors observe different degrees of order with similar Rubisco concentrations, so fibrils aren't simply a mechanism for increasing the concentration of Rubisco within the carboxysome (Line 219: "the dense and ordered packings overlap in Rubisco concentration"). Thus, without mechanistic studies specifically disrupting the fibrils, it is probably too early to conclude that Rubisco polymerization likely helps increase the rate of carbon fixation.
Line 117, Results: "Rendering all Rubisco fibrils within a CB displays the ordered phase: a loose hexagonal lattice-like ultrastructure of Rubisco fibrils twisting about each other with six-fold pseudosymmetry ( Figure 3B)." -> How often was this hexagonal lattice observed in the dataset of 139 carboxysomes? Figs. 3A and 3C appear to analyze many carboxysomes, but the prevalence of this lattice is not clear to me. The figure legend claims that Fig. 3B is a representative example. In the supplement, please show at least four more examples of such hexagonal lattices in other carboxysomes.
Line 119, Results: "The fibrils are held at a distance of 12.5 ± 0.7 nm with a tilt of 10 ± 3 degrees ( Figure 3D)." -> Is the packing and helical tilt between neighbor fibrils always the same, or are there variations between carboxysomes? If not done already, please quantify this for the full dataset (It's not clear to me whether these numbers are averaged from every hexagonal lattice in the dataset or just from the example in Fig. 3B).
Line 124, Results: "We hypothesize that a disordered, lower-occupancy binding partner may maintain a maximum distance and promote tilt between fibrils. Such a linker would likely not be visible in a subtomogram average due to the combination of low occupancy and disorder, especially if there were also disorder in the binding site, causing the system to act as a 'fuzzy complex'. CsoS2 has the appropriate disorder, length and Rubisco binding sites to serve this role." -> Wouldn't a low-occupancy disordered linker promote significant flexibility between the Rubisco fibrils instead of this specific spacing and tilt? Perhaps there is a more physical packing explanation for the parameters of the observed hexagonal lattice?

Line 165, Results: "Rubisco crystal structures also display variable interactions in this region. H. neapolitanus Rubisco structures 1SVD and 6UEW both show alternative longitudinal interactions in the crystal structure involving helix 3 (Extended Data Figure 4). Both crystal structure interactions lie within the range represented in our data, suggesting these conformations may be sampled in vivo (though not the dominant conformation)." -> Two comments here. First, please be careful with the wording because isolated carboxysomes are not in vivo. Second, it would be helpful to provide some context for these two PDB structures. 1SVD is an apo structure, whereas 6UEW is bound by CsoS2 N-peptide, but this is not clear from the text.

Line 187, Results: "7 Å C1 interior Rubisco subtomogram average" (shown in Fig. 5C) -> This Rubisco average appears to be filtered to a resolution that is too high, resulting in noisy spikey extensions all over the structure. Likely the 7 Å resolution is overestimated, and the map density should be lowpass filtered to a lower resolution (maybe 15 Å?) for more accurate display. I don't think this will negatively affect the point that the authors are trying to make here.
Line 195, Results: "We did however observe many shell-attached densities in the tomograms, which are best visible in tomograms of broken CB shells (Figure 5D). Some of these densities are the correct size for carbonic anhydrase, which also is capable of binding Rubisco and was previously found to be shell associated" -> Correct me if I am wrong, but didn't Blikstad et al. 2021 find that the carbonic anhydrase binds Rubisco and not the shell? I quote from that study: "This screen showed that CsoSCA interacted with Rubisco, while none of the other carboxysome proteins had detectable binding above background." Could these shell-attached densities instead be aggregates of CsoS2, which does interact with the shell? Note that rupture of carboxysomes and resulting exposure of CsoS2 to the buffer would likely result in aggregation.
Line 202, Results: "roughly 5% of CBs also contain a large enzyme that is consistent in size and shape with a 20S proteasome (Extended Data Figure 5)." -> This is very interesting, but it doesn't look like a 20S proteasome to me (I stared at many proteasome tomograms during my postdoc). Without a structural average, it's just guesswork, and I understand that there aren't enough particles for a good average. But… It looks quite similar to this archaeal chaperonin (See Fig. 1 https://doi.org/10.1016/j.str.2011.03.005).
Line 218, Discussion: "We observed three Rubisco packing types in the CBs: sparse, dense and ordered ( Figure 2A)." -> For clarity, it would be very helpful to include a small supplemental table detailing the number of carboxysomes in the tomograms that were ruptured / intact sparse / intact dense / intact ordered. This information could also be incorporated into Extended Fig. 1, to help readers understand how parameters such as Rubisco concentration correlate with the above classes.
Line 229, Discussion: "Polymerization is a highly effective packing strategy, and does not obstruct the Rubisco active site nor CsoS2 and CsoSCA binding sites." -> Without getting overly speculative, the authors could expand on this idea a bit to discuss what physiological advantages Rubisco fibril formation could bring. Also, do the authors expect other alpha-carboxysomes to form Rubisco fibrils? What about beta-carboxysomes, which have also not been studied yet at high resolution?

Line 482, Methods: "Rubisco subtomogram average unfiltered half-maps and sharpened full map will be deposited in the EMDB at time of publication. Raw tomogram movie frames may be accessed through the Caltech Electron Tomography Database (https://etdb.caltech.edu/) upon publication." -> Please also deposit example tomograms in the EMDB and raw tomogram movie frames in EMPIAR. The Caltech ETDB is great, but EMDB/EMPIAR remains the main community repository for now, so it should also receive copies of the data.
I very much enjoyed reading this manuscript, which describes an interesting set of data relating to packing arrangements of Rubisco in Halothiobacillus neapolitanus carboxysomes. It utilizes cryo electron tomography, coupled with existing Rubisco crystal structures, to solve Rubisco arrangements within purified carboxysomes. A take-home result is that high concentrations of Rubisco within carboxysomes lead to increasingly ordered packing of the enzyme, via observable interaction interfaces. While the manuscript provides very interesting insights into Rubisco ordering in these microcompartments, I felt that more could be done to extend the impact of the manuscript and further elucidate the observed interactions between Rubisco holoenzymes. For example, can the interactions be eliminated through mutation of the interacting interfaces? Are the interacting interfaces highly conserved? What would be expected in carboxysomes from different organisms given the conservation (or lack thereof)?
A general question relating to the observed 'classes' of carboxysomes based on their relative packing density (sparse, dense, ordered): could the observed heterogeneity in cbx composition and structure be due, in part, to the growth conditions of Halo and/or the potential lack of growth dependency on cbx rubisco in these organisms (they can grow heterotrophically and have more than one Rubisco)? Would a system or a growth condition which enforces cbx growth dependency provide more definitive results? Also, is there a way of confirming that the observed interactions are not artefactual? It is worth noting that similar tightly packed Rubisco, appearing as a paracrystalline array, has been observed in beta carboxysomes (e.g. Kaneko et al.) and in older EM images of pyrenoids, both of which have since been described as liquid condensates which can become even more freely moving within oxidised beta carboxysomes, suggesting that these previous paracrystalline observations were potentially artefactual. Can the authors provide comment to assure the reader the structures observed here are a genuine reflection of functional alpha-cbx?

In general, I think the work is highly interesting but would be suitable for this journal if additional analyses were carried out, as highlighted in specific comments below. I would be extremely happy to review a version of the manuscript with amendments.
We thank the reviewer for this thoughtful review and consideration of biological context. We have added several components to the manuscript in response, including analyses of conservation, packing, and in vivo tomography.
Our analysis widely differs from the referenced work in other systems. The resolution of our dataset leaves little ambiguity regarding the organization of the CBs in vitro, thanks to our care to accurately identify 98-99% of Rubisco complexes inside the carboxysomes prior to averaging. To guard against potential purification artefacts, we have now confirmed with in vivo tomography that both the dense and ordered CB packing types are present inside Halothiobacillus neapolitanus cells. We do believe that the sparse CBs are either overrepresented due to our purification protocol or the result of Rubisco leakage from dense CBs (under-represented in vitro compared to in vivo), which we have adjusted the manuscript to clarify.
The Rubisco-Rubisco binding site does not present clear residues that would direct binding, so we performed a conservation analysis. We found that this site has some of the lowest conservation in the Rubisco complex. While disappointing from the perspective of presenting targets for mutagenesis, this is consistent with our envisioned model of the lattice being primarily a harm-reduction mechanism (avoiding rampant cytosolic polymerization, tightly packed crystallization that could obstruct the active site, etc.). A harm-reduction mechanism would result in negative selection that is structurally diffuse, as we see here, rather than the positive selection that would result in clear targets for mutation.
Given this lack of a single clear target, unfortunately we do not think mutagenesis is feasible in the timescale and scope of this revision. Mutagenesis would require genome modification, and to our knowledge Halothiobacillus cannot grow heterotrophically. While Halos do possess an additional Form II rubisco, this can only sustain growth under conditions of elevated CO2 (>5%). It is known from various knockout experiments that elimination of the carboxysomal rubisco, knockouts abolishing the carboxysome, or even compromising the integrity of the shell lead to high CO2 requiring phenotypes (Baker et al., J Bacteriology 1998;Cai et al., Life 2015;Cai et al., PloS ONE 2009;Desmarais et al. Nat. Microbiol. 2019). Therefore, mutation to the carboxysomal rubisco can have far-reaching effects on the cell that could also affect carboxysomes and the CCM in ways we cannot easily characterize, beyond the obvious potential effects on activity. For all these reasons, rather than attempting an undirected mutagenesis screen we have answered the reviewer's questions with additional wild-type analysis.
Specific comments:

Line 36: An additional reference for consideration here is the recent review by Hennacy and Jonikas (Annual Review of Plant Biology), further highlighting the importance of transplanting such systems to a non-native host.

We have added this reference. Thank you for pointing it out to us!

Line 40-41: the alpha-cbx of Halo is a 'representative' of cbx's found also in cyano's and other chemoautotrophs?

Thank you for noticing this. We have adjusted the text to read "The α-carboxysome (CB) is a microcompartment responsible for carbon fixation in many cyanobacteria and chemoautotrophs", and moved our reference to H. neapolitanus to the last paragraph of the introduction.
Line 44: 'could potentially', is the evidence for the Rubisco-S2 interaction inconclusive?
We have adjusted the language to read "A third abundant component, CsoS2, is a disordered scaffold protein that binds both the shell and Rubisco and is responsible for rubisco's encapsulation." The evidence for the interaction is strong, but to our knowledge these clusters have not definitively been proven to contain CsoS2 nor have other contributors been conclusively ruled out. Line 56: 'non-equilibrium bicarbonate pool'. This is not clear for those who do not understand CO2 chemistry. Perhaps expand to indicate that there is specifically a disequilibrium between CO2 and bicarbonate, in favour of bicarbonate, within the cytoplasm?
We have adjusted this language and expanded the paragraph as suggested. The modified text now reads: "To circumvent these problems, some autotrophic bacteria employ a CO2 concentrating mechanism (CCM). In the CCM, bicarbonate is actively pumped into the cytosol and then diffuses through the semi-permeable protein shell into the carboxysome where carbonic anhydrase converts it to CO2. In this way, high local CO2 concentrations are provided to the encapsulated Rubisco, maximizing its turnover and outcompeting oxygenase activity."

Line 58-61: Models including Kaplan, Badger, Mangan, Long etc. seem to suggest that encapsulation is sufficient. However, all these models are based on unknown diffusional limitations of the shell. Therefore, the primary unknown is the real diffusional resistance of the shell and does not appear to necessitate allosteric regulation of kinetics.

We fully agree with the reviewer that these models suggesting encapsulation as being sufficient also make strong assumptions about shell properties. Yet, the biochemistry of CsoSCA - which is thought to be post-translationally regulated - suggests that not all mechanisms are captured in the current models. We believe that one critical missing item from this is enzyme scaffolding and/or positioning inside the CB, which could compensate for a shell with less diffusional resistance than is generally modeled. However, recent results in the biological condensates literature (e.g. Michael Rosen and colleagues 2021) suggest that enzyme scaffolding can also have allosteric effects, making us believe it is important not to rule out this possibility. To begin to study this in a structural fashion, we attempted a comparison of Rubisco fibrils vs. monomeric complexes, but the resolution (7 A) was not sufficient to rule out the small conformational changes typically associated with allostery. This entire paragraph has been extensively revised, and we have changed the mention of allostery to "other regulation".
Line 66-67: What evidence is presented to suggest this unique feature leads to the rate of CO2 fixation? Comparison of active site turnover numbers for the different packing classes of carboxysomes would provide this information. Purification of numerous carboxysome preparations from cells grown under a variety of conditions may lead to different proportions of packing types, therefore enabling such analysis. Correlation of packing type and site-dependent Rubisco turnover rates would clarify if packing arrangement led to differential (and possibly allosteric) regulation of Rubisco function.

We have adjusted the language describing the effect of lattice formation to read, "This feature of the alpha-carboxysome may facilitate high-density encapsulation of Rubisco without compromising enzyme activity." Unfortunately purification appears to have an effect on the proportion of sparse/dense/ordered CBs, which makes a robust in vitro biochemical analysis of packing effects impossible. The liquidity of Rubisco within the CB would also make it difficult to draw conclusions from such an experiment. Figure 3B to white, to better bring out the non-lattice Rubiscos. 4) Included Extended Movie 1, which tilts this representation to better bring out the packing. We hope these changes will better contextualize Figure 3B.

Angstrom offset which may correlate very well with observed packing in rods, possibly contributing to rod-formation by an iterative extension of shell formation and Rubisco organization. Again, this highlights an opportunity to assess this via mutational analysis to manipulate the Rubisco interacting interfaces. In addition, how conserved are the proposed interacting interface residues? It would be safe to assume that there should be a high degree of conservation in these regions of Rubisco large and small subunits.

We agree with the reviewer that the connection between sequence evolution and fibril formation is highly interesting! We have performed an analysis of conservation in Rubisco, with attention to the Rubisco-Rubisco interface (Ext. Fig. 5). Briefly, we find that the small subunit interaction site is one of the least conserved areas of the Rubisco complex. The interaction region does not have a clear set of residues that would be physically responsible for this interaction (Ext. Fig. 5), making it difficult to perform a meaningful mutational analysis in the absence of sequence conservation. The large standard deviation of our twist angle also suggests that, while our pictured interaction is dominant, elimination of it would simply result in a subtle shift rather than a significant disruption of the lattice, which would be difficult to identify outside another full high-resolution analysis (~1.5 years of effort).
We believe that this is an example of a known tendency of D4 proteins to polymerize at high concentrations based on the non-specific interaction of a small number of charged residues coupled with symmetry effects (Garcia-Seisdedos et al, Nature 2017). Based on this principle and the results of this study, a mutational analysis would be inconclusive because the interaction would simply shift to use other charged residues in the vicinity. This is likely the reason why multiple twist angles of the rubisco complexes are found in both our data and in the crystal structures.
We do observe occasional carboxysomes in vivo that have an elongated morphology (new Figure 6D); unfortunately these do not survive purification. The elongated CBs have Rubisco in layers, which can be either polymerized or not. We also do not observe either direct Rubisco-shell interaction or a set orientation of Rubisco relative to the shell (Figure 5B), which is consistent with the likely tethering of Rubisco to the shell by CsoS2 (a disordered protein that would not be expected to give a strong angular preference). Carboxysomes display a wide range of heterogeneity both within H. neapolitanus and in other bacterial species, and there is not currently sufficient understanding of packing morphologies to enable targeted mutagenesis to alter it. In work beyond this paper, we intend to investigate these questions with cellular tomography and hope to have a more satisfying answer in a future publication.

We have added these indicators. The CsoSCA is not based on structure but rather consistent appearance with expected size and shape, so we have labeled it as "unidentified protein density" in the figure legend and included a text description of the possible proteins that could be consistent with this size and shape.
Lines 122-129: This short paragraph seems to take some time to let the reader know that S2 is possibly the link to Rubisco here. Perhaps it could be written more succinctly and definitively.

We have rewritten this paragraph to focus less on the technical reason for not resolving the density, and to instead describe the uncertainty in identifying the binding partner. The paragraph now reads: "Despite this order, no rigid scaffold is observed holding the Rubisco lattice together. An average of adjacent fibrils shows faint densities between the large and small subunits of adjacent Rubiscos (Fig. 3D), suggesting a flexible, sub-stoichiometric binding partner may be present in the lattice. CsoS2 and carbonic anhydrase are both known to bind Rubisco through low-affinity interactions with disordered peptides. CsoS2, in particular, has the appropriate disorder, length and Rubisco binding sites to serve this role. The Rubisco termini have also been suggested to participate in intermolecular interaction, which could provide an alternative route for assembly."

Lines 174-180: Is there an observed Rubisco:CsoS1 interface? Would observation of rod-like cbx's offer any clues?
We do not observe a direct interface between Rubisco and the shell in any tomograms in vitro or in vivo, including in elongated morphology CBs in vivo. This is consistent with previously published in vivo studies by the Jensen lab and biochemical data from the Savage lab (Oltrogge et al. 2020). We suspect that there is an indirect interaction via a linker protein, most likely CsoS2, which would be consistent with the area adjacent to the shell always being occupied first.
Line 202-203: a large 'enzyme' consistent in size and shape with a 20S proteasome? No evidence is presented for what appears to be a disordered aggregate of protein, I assume in the first image in Ext. Fig 5.

The enzyme refers to Ext. Fig. 5B-C; we have adjusted this language to indicate a "large protein complex".
Ext. Fig. 5A is the aggregate that we refer to in these lines of text; we have adjusted the figure reference to specify 5A and 5B-C for the different clauses in the sentence. The aggregate is a feature occasionally observed but with no known function or identity. Similar dense spots are found in vivo, but have not been assigned a protein identity or role (Iancu ... Jensen, 2010 JMB). CsoS2 would make the most sense as a disordered protein component known to be present at high concentrations in the carboxysome.
Line 223-225: Note that b-cbx have been observed as paracrystalline in cyanobacteria (Kaneko et al, via TEM), despite current knowledge that the interior of these cbx's should be more motile. This raises the question of whether the observations here and in many circumstances might result as artefacts from the sample preparation process or if they are indeed real. This is where further analysis could be extremely helpful. For example, can specific Rubisco mutations still enable entrapment of the enzyme in carboxysomes but eliminate organised structure at high internal concentrations? We agree that care needs to be taken to avoid the same controversy as happened in beta carboxysomes. We have answered the question of sample preparation artefacts by collecting an additional in vivo dataset, which confirms that the ordered CBs are found in vivo (Figure 6, Extended Data Table 1, Ext. Fig. 6E).
Importantly, the loose lattice we observe is non-crystalline, and the ordered phase always coexists with at least some Rubisco complexes that do not participate and would remain motile. We believe the lattice retains the ability to reorganize due to the low Rubisco-Rubisco affinity that allows the long fibrils to break easily unless held in place by neighboring fibrils, which was recently echoed in a preprint from the Zhang lab showing that packing morphology can be changed by extreme non-physiological concentrations of calcium. Our second-order tensor analysis (Figure 3) is borrowed from liquid crystal theory, which is likely the best way to describe the ordered phase. We have added a sentence in the discussion to clarify this (2nd paragraph): "It is likely that the CB interior can reorganize, given the wobbly Rubisco-Rubisco interaction (visible in the non-zero bending angle, which requires at least 2 of the 4 small subunit interaction sites to be unbound)."
Line 246-247 is a key outcome here. It implies that an evolved, preferential high-concentration-dependent packing will still maintain Rubisco function under 'saturating' carboxysome filling conditions. This should be emphasized, and it implies a calculable upper limit of Rubisco concentrations within carboxysomes which is driven entirely by the observed arrangement. How does this observed Rubisco packing compare with what has been described as 'Kepler packing' as an estimate of maximum Rubisco quantities in both alpha and beta carboxysomes?
This was a very interesting question. Because the carboxysome lattice is twisted, it cannot tile space indefinitely to create a unit cell definition, so we performed an analysis of occupied space in a 6-fibril area (the number of fibrils that we use to separate dense and ordered morphologies). This analysis gave an upper bound (Kepler packing) for Rubisco density equivalent to roughly 1.1 mM holoenzyme, while our calculated Rubisco concentrations in the CB go up to roughly 900 µM. We have added this analysis to the paper text. For review only, we also include the below results confirming that the coarse-grain model from Kepler packing is in strong agreement with the atomistic analysis we performed.
Fig. R1
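For orientation, the kind of coarse-grain bound quoted above can be reproduced with a short back-of-the-envelope calculation. The sketch below treats the holoenzyme as a sphere with an assumed effective diameter of about 13 nm (comparable to the 12.5 nm fibril spacing); the spherical approximation and the chosen diameter are illustrative assumptions, not values taken from the atomistic analysis itself.

```python
import math

# Rough check of the Kepler-packing upper bound quoted above.
# Assumption (illustrative, not from the manuscript): the Rubisco holoenzyme is
# treated as a sphere with an effective diameter of ~13 nm, comparable to the
# reported 12.5 nm fibril spacing.
KEPLER_FRACTION = math.pi / math.sqrt(18)   # ~0.7405, densest packing of equal spheres
AVOGADRO = 6.022e23                         # mol^-1

def kepler_upper_bound_molar(diameter_nm: float) -> float:
    """Upper-bound molar concentration for close-packed spheres of a given diameter."""
    radius_m = diameter_nm * 1e-9 / 2
    sphere_volume_m3 = (4.0 / 3.0) * math.pi * radius_m ** 3
    number_density_per_litre = KEPLER_FRACTION / sphere_volume_m3 * 1e-3  # 1 m^3 = 1000 L
    return number_density_per_litre / AVOGADRO                            # mol/L

print(f"{kepler_upper_bound_molar(13.0) * 1e3:.2f} mM")  # ~1.1 mM, the same order as the bound above
```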
Extended Fig 1. Is it possible to delineate, on these data plots, the apparent 'classes' of cbx (dense, sparse, ordered)?
We have made this adjustment and turned all the Ext. Fig. 1 plots into stacked histograms.
Reviewer #2 (Remarks to the Author): "Rubisco forms a lattice inside alpha-carboxysomes" Reviewed by Manon Demulder and Benjamin Engel Metskas et al. use cryo-electron tomography (cryo-ET) to provide the first high-resolution visualization of the alpha-carboxysome, which has remained rather enigmatic compared to the better understood beta-carboxysome. Surprisingly, the authors observe linear fibrils of Rubisco within some of the isolated carboxysomes. They then generate a subtomogram average of this Rubisco at a fairly impressive 4.5 Å resolution to show that fibril formation is apparently mediated by interactions between the Rubisco small subunits. The polymerization of Rubisco into fibrils is a novel and interesting result, which is of high interest to the whole field of carbon concentration/fixation. The transition from Rubisco liquid condensate into ordered fibril is also of broad interest to the field of phase separation, and may even carry potential interest all the way out to fields such as neurodegeneration, which study these liquid-to-solid state transitions. However, there is one major issue with this study, which should either be addressed with additional experiments, or be acknowledged with careful rewording of the text and its interpretations.
Major issue:
A major limitation of this study is that the cryo-ET was performed on isolated carboxysomes. Some features such as the Rubisco fibrils are likely also present within the cell, as I cannot imagine how isolation of carboxysomes would induce fibril formation. However, the isolation procedure and subsequent blotting onto EM grids are unquestionably disruptive--this is clearly shown by the aggregate of broken carboxysome shells in Fig. 5D. Furthermore, I know from my own experience that larger isolated structures (e.g., centrioles) are very easily compressed when they are blotted and frozen into a thin layer of ice on an EM grid. This compression is unavoidable, and is often dramatic. Compression would almost certainly impact the organization of Rubisco within the isolated carboxysomes, especially given the flexibility/disorder of the CsoS2 linker. It could also detach Rubisco in the carboxysome interior from the layer of Rubisco more tightly bound to the carboxysome wall. The authors conclude that "the most consistent feature of the CBs is their compositional and structural heterogeneity" (Line 200), but with the present data, it is not possible to say whether this is a result of carboxysome biology or an artifact of the sample preparation. Specifically, the isolation and blotting of carboxysomes could potentially affect the following analysis presented in this study:
The reviewers bring up valid points about the limitations of studying a purified system. We agree with the reviewers that the most likely artefact is a decrease of order and potentially leakage of Rubisco, rather than the lattice which is the primary finding of the paper. To address this issue we have now recorded cryo-tomograms of intact H. neapolitanus cells and inspected the arrangement of Rubisco within carboxysomes in vivo. Again we observed clear Rubisco fibrils, as well as less-ordered but still densely-packed arrangements (see new Fig. 6 and Ext. Fig. 6E).
In addition, we note that unlike the beta-carboxysome and other complex Rubisco-filled organelles, the alpha-carboxysome has a rich history of purification and in vitro function; most of our biochemical understanding of the alpha-carboxysome is derived from purified microcompartments, so purification passes the most important test of structural biology (continued function of the sample). Compression forces from blotting are indeed a concern but one that is manageable, as evidenced by the history of structural virology in cryo-EM which are samples of similar size scale and arguably more mechanically delicate. We have adjusted the text to account for these potential effects as well as emphasize the functional nature of purified alpha-carboxysomes.
Further, other in vivo datasets at lower resolution may be found in previous Jensen lab publications, and are qualitatively consistent with our findings. Manual observation of orthoslices in Iancu et al. and Jensen, JMB 2010 is consistent with the Rubisco distance from the shell and lack of defined angle relative to it (the Rubisco-shell distance is not discussed in our manuscript, but is observable in the orthoslices throughout the figures). We have adjusted the manuscript text to include better references to published in vivo datasets from H. neapolitanus.
1) The relative abundance of Rubisco fibrils (Figs. 2A, 4A) Line 94: "Roughly one third of the carboxysomes displayed ordered packing"; Line 131: "The ratio of bound and free Rubisco concentrations scales with Rubisco concentration in the CB ( Figure 4A)." -> Are these numbers representative of the biology or the sample preparation?
We have adjusted the text to indicate that this is in vitro, and that the sparse packing may partially result from purification. We have also added in vivo numbers for the morphology comparison (Extended Data Table 1).
2) The bending angle and twist between Rubisco complexes along a fibril (Fig. 4C) Line 140: "There is a bending angle of 3 ± 2.5 degrees and a 14-degree standard deviation in the twist between stacked Rubiscos." -> Is this fibril bending and twisting present in the cell, or is it an artifact of compression forces caused by the blotting?
3) The higher-order packing of fibrils into a twisted hexagonal lattice (Fig. 3) Line 117: "Rendering all Rubisco fibrils within a CB displays the ordered phase: a loose hexagonal lattice-like ultrastructure of Rubisco fibrils twisting about each other with six-fold pseudo-symmetry (Figure 3B). The fibrils are held at a distance of 12.5 ± 0.7 nm with a tilt of 10 ± 3 degrees (Figure 3D)." -> Again, is this lattice spacing and tilt affected by the blotting forces? The orientation of the fibrils relative to the compression forces between the air-water interfaces is random, so it's physically unlikely to drive a specific bending angle or twist that is consistent among all these orientations. We have added this to the text of the results (in vitro reference) and methods (relative angle of fibrils to compression and shearing force vectors, included in a new paragraph of potential sample preparation effects relative to data analysis).
The most likely artefact caused by an aligned force on randomly oriented samples would be to increase the noise of the measurement. We have adjusted the results text to reflect this is in vitro, and the new methods text expands on the point that purification and/or blotting may increase the noise in this analysis.
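As a point of reference for how the per-pair angles discussed above can be extracted, a minimal sketch is given below. It assumes each particle's orientation is available as a rotation matrix with the C4 axis along the local z-axis and approximates the twist by the z-component of the relative rotation; the authors' actual subtomogram-analysis pipeline is not specified here and may differ.

```python
import numpy as np

def bend_and_twist(R1: np.ndarray, R2: np.ndarray) -> tuple[float, float]:
    """Bend and twist (degrees) between two stacked particles.

    R1, R2 are assumed to be 3x3 rotation matrices taking each particle's frame
    (C4 axis along local z) into the tomogram frame; the twist is approximated by
    the z-component of the relative rotation, which is accurate for small bends.
    """
    z = np.array([0.0, 0.0, 1.0])
    bend = np.degrees(np.arccos(np.clip((R1 @ z) @ (R2 @ z), -1.0, 1.0)))
    R_rel = R1.T @ R2                                          # relative rotation in particle 1's frame
    twist = np.degrees(np.arctan2(R_rel[1, 0], R_rel[0, 0]))   # rotation about the shared C4 axis
    twist = (twist + 45.0) % 90.0 - 45.0                       # wrap into [-45, 45) for C4 symmetry
    return bend, twist

# Example: second particle twisted by 10 degrees about z and bent by 3 degrees about x.
c, s = np.cos(np.radians(10)), np.sin(np.radians(10))
cb, sb = np.cos(np.radians(3)), np.sin(np.radians(3))
Rz = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
Rx = np.array([[1.0, 0.0, 0.0], [0.0, cb, -sb], [0.0, sb, cb]])
print(bend_and_twist(np.eye(3), Rx @ Rz))  # approximately (3.0, 10.0)
```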
4) The packing and orientations of non-fibril Rubisco in carboxysomes (Extended Figs. 1, 3) Line 95: "Other CBs displayed a range of Rubisco concentrations, from sparse to dense (Extended Data Figure 1). A nearest-neighbor Rubisco alignment search across all CBs revealed a concentration dependence: Rubisco orientation is random at low concentration (sparse CBs), but becomes increasingly non-random as concentration rises (dense CBs, Extended Data Figure 3)."; Line 205: "Finally, even in the most ordered alpha-carboxysomes, a substantial portion of the Rubisco does not participate in the lattice ( Figure 3B). This heterogeneity is consistent with models of CB assembly, which are based on phase separation and binding affinities rather than a tightly ordered, regular assembly mechanism" -> Is the observed heterogeneity in Rubisco concentration and organization representative of biological variability or artifacts from the isolation and blotting? We have confirmed our findings with in vivo data; please see the revised manuscript. Spatial heterogeneity in Rubisco packing is present in previous in vivo studies by the Jensen and Chiu labs on multiple bacterial species and is the most likely reason that the lattice went unnoticed before now. We have adjusted the text near original line 205 to reflect the in vitro nature of our results and their consistency with previous publications. Rubisco concentration (original line 95) has already been addressed above and the text has been adjusted accordingly there -and in the methods -to reflect the possible effect of in vitro study. 5) Rubisco organization and concentration next to the carboxysome shell compared to the carboxysome interior (Fig. 5) Line 174: "The Rubiscos adjacent to the shell occasionally participate in the long fibrils, but not consistently, and appear to behave differently than Rubisco in the CB interior."; Line 176: "the layer is well-populated even in otherwise sparse CBs. We analyzed the angle of the Rubisco C4 axis relative to the shell and found a predominantly random distribution… ( Figure 5B)." -> Is this distinction in the organization of shell-adjacent Rubisco also seen within native cells, or have forces from the isolation and blotting detached Rubisco fibrils in the carboxysome interior from the layer of Rubisco more tightly bound to the carboxysome wall? The lack of defined angle relative to the shell, and the inconsistent participation in lattices, are present in vivo as well. Please see the new Figure 6 and Ext. Fig. 6E for orthoslices. This is also consistent with current models of alpha-carboxysome biogenesis.
6) Proposals about carboxysome assembly/maturation mechanisms (Discussion) Line 219: "Sparse carboxysomes lack the concentration necessary to form an ultrastructure, but the dense and ordered packings overlap in Rubisco concentration. All packing types display polymerization of the Rubisco; therefore, the primary distinction between dense and ordered CB packing is whether the fibrils align"; Line 226: "Therefore, the phase transition between dense and ordered packing may involve the CB maturation trajectory." -> Are we really seeing a maturation-driven phase transition, or are some carboxysomes more disrupted than others? Existing in vivo datasets show a lack of paracrystalline order in the alpha-carboxysome (see cited works from the Jensen and Chiu labs), so a spectrum of order is consistent with these data. This is also found in our in vivo data, which show both dense and ordered CBs inside cells. In our response to comment 4 above we added text on in vivo heterogeneity that will also address this.
Line 219 is simply a description of our classification and a reminder to the reader that all polymerization is inherently concentration-dependent and thus a concentration threshold to Rubisco ultrastructure would be expected on a physical basis. Line 226 was speculative and we have replaced it with, "but the relationship between spatial heterogeneity, mobility and chemical environment remains unproven in this system".
I am surprised and a bit disappointed that the authors did not complement this analysis of isolated carboxysomes with some bona fide in situ data of FIB-milled Halothiobacillus cells. This approach is becoming mainstream for leading cryo-ET labs, and indeed, the last author of this study has access to the necessary FIB/SEM instrumentation, as evidenced by their recent publications: Swulius et al. 2018; Martynowycz et al. 2019; Zhang et al. 2020; Carter et al. 2020; Mageswaran et al. 2021; Nicolas et al. 2022.
Regardless of issues associated with the isolation procedure, the Rubisco fibrils appear to be real, and they are a novel finding of high interest to the field. As far as I can tell, the subtomogram averaging performed on these fibrils is methodologically solid, providing some clues into how the fibrils may form via interactions between the small subunits. Thus, I see two possible routes to publication: 1) The authors explicitly describe this work as an in vitro study of isolated carboxysomes. They emphasize the aspects that likely remain true in these isolated conditions, such as the Rubisco-Rubisco structural interactions that establish the fibrils, while the other analyses that may be affected by the isolation (outlined above) should be carefully qualified.
2) The authors add complementary in situ cryo-ET from FIB-milled cells. This does not have to be an extensive dataset, but in situ analysis would be highly valuable to confirm the in vitro observations and add reliable information about the prevalence of different Rubisco organizations within native cells. I think route #2 is definitely preferred and will significantly increase the impact of this study. However, if the authors opt for route #1, I would not object to publication as long as the conclusions are properly qualified.
We have now both added in vivo cryo-ET data and adjusted the text for purified CBs. We agree with the Reviewers that the original results reported in the reviewed draft were not in vivo and have clarified that language throughout, and we have now highlighted the possible problems associated with purification and discussed them. We have also now recorded tilt-series of intact cells to confirm all the major findings (shown now in the new Figure 6 and Ext. Fig. 6E). While we agree that a FIB-milled dataset would be ideal, one of sufficient quality and quantity to address the reviewers' concerns as to tilt, twist, etc. is not feasible for us in the timescale and context of this revision.
Minor issues and requested text changes: Line 22, Abstract: "Here, we use cryo electron tomography and subtomogram averaging to determine an in situ structure of Rubisco at 4.5 Å" -> This is certainly NOT an in situ structure. In situ means within the original natural environment. Specifically regarding cryo-ET, in situ means inside the cell, as has been demonstrated in many previous cryo-ET studies of intact small cells or larger cells that have been thinned with a cryo-microtome or by focused ion beam milling. The Rubisco subtomogram average in this study is derived from isolated carboxysomes, which are not in their native state. We have adjusted the text to read "a 4.5 Å structure of Rubisco inside the alpha-carboxysomes".
Line 29, Abstract: In the abstract, it is claimed that the insights into Rubisco fibril formation "provide future directions for bioengineering of microcompartments." -> In the discussion section, the authors should provide some detail on how Rubisco fibrils open these new bioengineering opportunities. Conversely, if it is too early to predict these possible applications, this claim could (should!) be removed from the abstract. We have added this to the discussion (final sentence, 3rd paragraph from the end). It's likely that including a weak-affinity polymerization in cargo enzymes could assist their incorporation at high concentration. The Rubisco concentrations inside the CBs can approach Kepler packing (see Reviewer 1 response), which is not typically seen in liquid-liquid phase separation alone (the current CsoS2 model in the field).
Line 66, Introduction: "This unique feature of the alpha-carboxysome likely helps to increase the rate of carbon fixation inside the CB." -> How do the authors know that Rubisco polymerization is a unique feature of alphacarboxysomes? Beta-carboxysomes also contain a pseudo-crystalline lattice, which has not yet been studied at high resolution. Also, the authors observe different degrees of order with similar Rubisco concentrations, so fibrils aren't simply a mechanism for increasing the concentration of Rubisco within the carboxysome (Line 219: "the dense and ordered packings overlap in Rubisco concentration"). Thus, without mechanistic studies specifically disrupting the fibrils, it is probably too early to conclude that Rubisco polymerization likely helps increase the rate of carbon fixation.
We have replaced the text in question with "This feature of the alpha-carboxysome may facilitate high-density encapsulation of Rubisco without compromising enzyme activity." Line 117, Results: "Rendering all Rubisco fibrils within a CB displays the ordered phase: a loose hexagonal lattice-like ultrastructure of Rubisco fibrils twisting about each other with six-fold pseudo-symmetry ( Figure 3B)." -> How often was this hexagonal lattice observed in the dataset of 139 carboxysomes? Figs. 3A and 3C appear to analyze many carboxysomes, but the prevalence of this lattice is not clear to me. The figure legend claims that Fig. 3B is a representative example. In the supplement, please show at least four more examples of such hexagonal lattices in other carboxysomes. We apologize for confusion on this point. The lattice is how we define the "ordered" carboxysomes, so the prevalence is a little less than half the dataset though the size/number of concentric layers varies. We have adjusted the text (paragraph starting at original line 94) to better link the ordered morphology classification with the lattice. We have also included the additional orthoslices in the Extended figure as requested and a table with numbers. In keeping with best practice in biophysics, all analyses are performed on all data, as described in the methods.
Line 119, Results: "The fibrils are held at a distance of 12.5 ± 0.7 nm with a tilt of 10 ± 3 degrees ( Figure 3D)." -> Is the packing and helical tilt between neighbor fibrils always the same, or are there variations between carboxysomes? If not done already, please quantify this for the full dataset (It's not clear to me whether these numbers are averaged from every hexagonal lattice in the dataset or just from the example in Fig. 3B). This analysis is for the entire dataset, otherwise the data would not be sufficient to justify use of Gaussian distributions in the analysis due to the small number of fibrils per CB. We have adjusted the methods text to better reflect this. We do not observe quantifiable variation between carboxysomes, but it's possible that this is due to the combination of a loosely-ordered lattice and a relatively small number of both dependent and independent datapoints rather than a lack of biological variance between CBs.
Line 124, Results: " We hypothesize that a disordered, lower-occupancy binding partner may maintain a maximum distance and promote tilt between fibrils. Such a linker would likely not be visible in a subtomogram average due to the combination of low occupancy and disorder, especially if there were also disorder in the binding site, causing the system to act as a 'fuzzy complex'. CsoS2 has the appropriate disorder, length and Rubisco binding sites to serve this role." -> Wouldn't a low-occupancy disordered linker promote significant flexibility between the Rubisco fibrils instead of this specific spacing and tilt? Perhaps there is a more physical packing explanation for the parameters of the observed hexagonal lattice?
We have adjusted the text to remove the mention of maximal distance and tilt, as well as other items requested by the other reviewer. Physically, a maximal distance would suffice for our distributions, as the minimal distance is provided by Rubisco itself and Brownian motion would take care of the rest within that confined space. However, we agree that the tilt is more difficult to explain. We have not identified a physical principle that would explain the tilt in the absence of an organizing partner to at least maintain a specific distance between the chains, and a simulation would require parameterization beyond what current knowledge can justify.
Line 165, Results: "Rubisco crystal structures also display variable interactions in this region. H. neapolitanus Rubisco structures 1SVD and 6UEW both show alternative longitudinal interactions in the crystal structure involving helix 3 (Extended Data Figure 4). Both crystal structure interactions lie within the range represented in our data, suggesting these conformations may be sampled in vivo (though not the dominant conformation)." -> Two comments here. First, please be careful with the wording because isolated carboxysomes are not in vivo. Second, it would be helpful to provide some context for these two PDB structures. 1SVD is an apo structure, whereas 6UEW is bound by CsoS2 N-peptide, but this is not clear from the text. We have adjusted the text to include context for the crystal structures.
We apologize if this text is not clear. We are stating that the Rubisco-Rubisco interaction inside the crystal structures is also present in purified carboxysomes, and thus is likely not a crystal packing artefact. (Please also note we do not say "representative" but "sampled", which allows for shifts between purified CBs and CBs inside cells. We have added extensive references to the purified nature of the dataset in response to other points in this review, so this will be clear to the reader in the revised manuscript.) Line 187, Results: "7 Å C1 interior Rubisco subtomogram average" (shown in Fig. 5C) -> This Rubisco average appears to be filtered to a resolution that is too high, resulting in noisy spikey extensions all over the structure. Likely the 7 Å resolution is overestimated, and the map density should be lowpass filtered to a lower resolution (maybe 15 Å?) for more accurate display. I don't think this will negatively affect the point that the authors are trying to make here. We agree this map is over-sharpened; it's a known and inherent problem in post-processing C1 subtomogram averages at lower resolutions (for another recent example of this known issue, see Qu ... Briggs, Science 2022). The local resolution in this slab view is also lower than the global resolution due to its location on the exterior of the enzyme (also commonly seen). We have shifted Figure 5C to a simple low-pass filtered average (unsharpened, all data), and removed references to a specific resolution to avoid confusion between local and global.
Line 195, Results: "We did however observe many shell-attached densities in the tomograms, which are best visible in tomograms of broken CB shells ( Figure 5D). Some of these densities are the correct size for carbonic anhydrase, which also is capable of binding Rubisco and was previously found to be shell associated" -> Correct me if I am wrong, but didn't Blikstad et al. 2021 find that the carbonic anhydrase binds Rubisco and not the shell? I quote from that study: "This screen showed that CsoSCA interacted with Rubisco, while none of the other carboxysome proteins had detectable binding above background." Could these shell-attached densities instead by aggregates of CsoS2, which does interact with the shell? Note that rupture of carboxysomes and resulting exposure of CsoS2 to the buffer would likely result in aggregation. We have added the possibility of CsoS2 aggregate to the text, as well as the rubisco activase which is probably more likely (IDP aggregation does not typically yield a specific, dense shape of limited molecular weight). While the Blikstad 2021 paper does indeed indicate Rubisco binding rather than shell interaction, the "S" in CsoSCA stands for shell due to earlier papers (cited in the text) indicating historical observations of binding. It's possible that all these papers may be correct, as the PPI data may not perfectly mimic the chemical environment inside the carboxysome (which may also shift over time).
Line 202, Results: "roughly 5% of CBs also contain a large enzyme that is consistent in size and shape with a 20S proteasome (Extended Data Figure 5)." -> This is very interesting, but it doesn't look like a 20S proteasome to me (I stared at many proteasome tomograms during my postdoc). Without a structural average, it's just guesswork, and I understand that there aren't enough particles for a good average. But… It looks quite similar to this archaeal chaperonin (See Fig. 1 https://doi.org/10.1016/j.str.2011.03.005). Thank you for pointing this out to us! You're right that this looks like a better fit. We have adjusted the text to remove the reference to the 20S proteasome. We did not specifically mention the chaperonin lest we fall into the same trap, but we will continue to look for these and attempt a low-resolution average in the future if we can compile enough particles.
Line 218, Discussion: "We observed three Rubisco packing types in the CBs: sparse, dense and ordered ( Figure 2A)." -> For clarity, it would be very helpful to include a small supplemental table detailing the number of carboxysomes in the tomograms that were ruptured / intact sparse / intact dense / intact ordered. This information could also be incorporated into Extended Fig. 1, to help readers understand how parameters such as Rubisco concentration correlate with the above classes. We have incorporated this into Ext. Fig. 1 as requested, and added Extended Data Table 1 for both in vitro and in vivo datasets.
Visibly ruptured carboxysomes are occasionally present in the sample but as broken shells ( Figure 5D); they are intentionally avoided during data collection so an unbiased estimate cannot be made. Rubisco attached to empty shells was not analyzed. (NB: If the sparse CBs do represent Rubisco leakage/loss, which we agree is possible, the most likely explanation for the lack of visible shell rupture is that the shell reassociates prior to freezing.) Line 213, Discussion: "Recently, tomographic analysis of the algal pyrenoid has shown that Rubisco behaves in a liquid-like fashion." -> The authors could consider additionally citing the recent mechanistic study by He et al. (https://doi.org/10.1038/s41477-020-00811-y). We have incorporated this citation, thank you for pointing it out to us.
Line 229, Discussion: "Polymerization is a highly effective packing strategy, and does not obstruct the Rubisco active site nor CsoS2 and CsoSCA binding sites." -> Without getting overly speculative, the authors could expand on this idea a bit to discuss what physiological advantages Rubisco fibril formation could bring. Also, do the authors expect other alpha-carboxysomes to form Rubisco fibrils? What about beta-carboxysomes, which have also not been studied yet at high resolution? We have reorganized this part of the discussion to better emphasize the advantages while avoiding too much speculation as we cannot knock out the polymerization. We believe this is a "do no harm" mechanism for maximizing Rubisco concentration without sterically choking the enzyme. Other systems may not need this mechanism, as it may be specific to alphacarboxysome assembly where the Rubisco condenses in the cytoplasm at near-maximal concentrations. Because the H. neapolitanus Rubisco forms polymers in crystal structures, we examined other Rubisco crystal structures for these packing interactions. This phenomenon seems to be specific to Form 1A Rubisco.
Beta-carboxysomes are quite interesting. It's important to note that liquid crystal or paracrystalline packing does not necessitate fibril formation; they may indeed have this, but without a fibril-containing crystal structure or a resolved interaction site in situ it would be too speculative to discuss here.
Line 482, Methods: "Rubisco subtomogram average unfiltered half-maps and sharpened full map will be deposited in the EMDB at time of publication. Raw tomogram movie frames may be accessed through the Caltech Electron Tomography Database (https://etdb.caltech.edu/) upon publication." -> Please also deposit example tomograms in the EMDB and raw tomogram movie frames in EMPIAR. The Caltech ETDB is great, but EMDB/EMPIAR remains the main community repository for now, so it should also receive copies of the data. We will additionally deposit example tomograms and frames in EMDB/EMPIAR as requested. This paragraph will be updated with accession numbers and other pertinent information during final preparation of an accepted manuscript.
|
v3-fos-license
|
2018-05-15T23:57:36.245Z
|
2015-11-05T00:00:00.000
|
46886601
|
{
"extfieldsofstudy": [
"Biology"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "http://www.scirp.org/journal/PaperDownload.aspx?paperID=61338",
"pdf_hash": "c4d291b499e5354be015c908ef6a8da310761926",
"pdf_src": "ScienceParseMerged",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46572",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"sha1": "c4d291b499e5354be015c908ef6a8da310761926",
"year": 2015
}
|
pes2o/s2orc
|
Characterization of KPC, NDM and VIM Type Carbapenem Resistance Enterobacteriaceae from North Eastern, Nigeria
Introduction and Aim: Carbapenem resistance among species of Enterobacteriaceae has emerged as a global public health problem that adds to the high cost of care, severity and high mortality of otherwise straightforward infections. Governments around the world are devoting efforts to combat this important threat. The present study was undertaken in our setting to detect and characterize carbapenem resistance among Enterobacteriaceae. Methodology: Confirmed species of Enterobacteriaceae isolated from 225 patients admitted to various units of University of Maiduguri Teaching Hospital (UMTH), Maiduguri, were screened for carbapenem resistance with meropenem and ertapenem discs (10 μg, Oxoid, England) using Clinical and Laboratory Standards Institute (CLSI) breakpoints. Suspected carbapenemase producers were subjected to confirmation using the Modified Hodge Test method. Detection of the carbapenemase genes was done by multiplex PCR using KPC, NDM-1 and VIM primers. Results: A total of 225 clinical isolates of Enterobacteriaceae comprising 73 (32.4%) of Klebsiella pneumoniae, 61 (27.1%) of Escherichia coli, 21 (9.3%) of Proteus mirabilis, 18 (8.0%) of Klebsiella oxytoca, 13 (5.8%) of Morganella morganii, 12 (5.3%) of Citrobacter freundii, 12 (5.3%) of Serratia marcescens, 7 (3.1%) of Enterobacter aerogenes, 3 (1.4%) of Klebsiella ozaenae, 3 (1.4%) of Hafnia alvei and 2 (0.9%) of Citrobacter sedlakii were isolated. A total of 28 (12.4%) of the isolates screened positive as carbapenemase producers. All the 28 screened isolates were further subjected to confirmation using the Modified Hodge Test, for which 23 (10.2%) were confirmed resistant. Therefore, a prevalence of 10.2% for carbapenem resistance was recorded in this study. Based on multiplex polymerase chain reaction, the various percentage genotypes of the carbapenemase producers were: 11 (47.8%) for KPC and 2 (8.7%) for VIM, while 5 (21.7%) isolates had co-existence of the NDM-1 and VIM genes. However, 5 (21.7%) of the isolates had none of the targeted carbapenemase genes detected.
Introduction
Gram-negative bacteria that produce extended-spectrum beta-lactamase enzymes capable of hydrolysing most cephalosporins have been reported worldwide [1]. The carbapenems, namely imipenem, meropenem, ertapenem and doripenem, became the antimicrobials of last resort used in treating infections due to these highly drug-resistant bacteria [2]. These antimicrobial agents became crucial in the management of life-threatening healthcare-associated and community-acquired infections. The consequence of this widespread use of carbapenems was the emergence of the first carbapenemase-producing Enterobacteriaceae (CRE) in 1993 [1].
The major risk factors for acquiring these CRE were: organ or stem cell transplantation, intensive care unit admission, poor nutritional status, severe illness, mechanical ventilation, prolonged hospitalization and previous surgery [3]. The Centers for Disease Control and Prevention (CDC) recommends rapid action for CRE in terms of proper and timely identification and the institution of aggressive preventive measures to combat this emerging threat [4]. The present study was undertaken to detect and characterize CRE from a major reference centre in northeastern Nigeria using both phenotypic and genotypic methods.
Study Area
University of Maiduguri Teaching Hospital (UMTH), Borno State, the principal referral center of northeastern Nigeria.
Study Design
The study was hospital-based, descriptive and cross-sectional.
Study Population
The study was carried out on patients hospitalized in UMTH from the following wards: medical ward (male and female), surgical ward (male and female), special care baby unit (SCBU) and the intensive care unit (ICU).
Study Period
The study was carried out from June 2014 to December 2014.
Sampling Method
Convenience (non-probability) sampling.
Specimen Collection
The isolates of Enterobacteriaceae were obtained from the following specimens: blood, urine, cerebrospinal fluid, stool, and swabs of patients with invasive diseases (i.e., bloodstream infections, catheter-related infections, ventilator-associated infections, etc.). All specimens were collected and transported according to standard methods [5].
Bacterial Identification
The specimens were inoculated on MacConkey agar. After 24-48 hours of aerobic incubation at 36˚C-37˚C, the colonial appearance and characteristics of the isolates on MacConkey agar were noted, and the isolates were then subjected to Gram staining and motility testing according to standard methods [5]. All suspected isolates of Enterobacteriaceae were confirmed with the Microbact™ Gram-negative identification system 24E (Oxoid) according to the manufacturer's instructions.
CRE Screening and Phenotypic Confirmation
All the bacterial isolates were screened for carbapenemases according to the CLSI guidelines [6]. Ertapenem and meropenem discs (10 μg, Oxoid, England) were used. The antibiotic discs were placed on the surface of inoculated Mueller Hinton agar plates using sterile forceps. The discs were placed about 30 mm apart and the plates were incubated for 24 hours at 37˚C, after which zones of inhibition were read.
Isolates that showed a zone of inhibition ≤ 21 mm in diameter for meropenem and/or ertapenem were considered suspected carbapenemase producers. Escherichia coli ATCC 25922 was used for quality control of the screening test [6].
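For illustration, the screening rule described above can be written out as a simple check on recorded zone diameters; the 21 mm cut-off is taken from the text, and the current CLSI tables should be consulted for authoritative breakpoints.

```python
def flag_suspected_carbapenemase(meropenem_zone_mm: float,
                                 ertapenem_zone_mm: float,
                                 breakpoint_mm: float = 21.0) -> bool:
    """Flag an isolate as a suspected carbapenemase producer when its zone of
    inhibition is <= the screening breakpoint for meropenem and/or ertapenem
    (10 ug discs), as described in the text above. Illustrative only."""
    return meropenem_zone_mm <= breakpoint_mm or ertapenem_zone_mm <= breakpoint_mm

# Example: meropenem 24 mm, ertapenem 19 mm -> flagged for Modified Hodge Test confirmation.
print(flag_suspected_carbapenemase(24, 19))  # True
```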
All Enterobacteriaceae isolates found resistant to the meropenem (10 μg) disc and/or the ertapenem (10 μg) disc, either alone or in combination, were subjected to confirmatory testing using the Modified Hodge Test [6].
A 0.5 McFarland standard of Escherichia coli ATCC® 25922 was prepared in saline and diluted 1:10 in saline. This suspension was then evenly inoculated with a sterile cotton swab onto the surface of a Mueller Hinton agar plate. A disc of ertapenem (10 μg, Oxoid, England) was placed at the centre of the inoculated plate. Thereafter, by means of a sterilized wire loop, the test organism was streaked, together with the two quality control organisms (Klebsiella pneumoniae ATCC® BAA-1705, MHT positive, and Klebsiella pneumoniae ATCC® BAA-1706, MHT negative), in a straight line out from the edge of the ertapenem disc. The plates were then incubated at 37˚C for 24 hours. Positivity for carbapenemase production was inferred from the appearance of a clover-leaf-type indentation or flattening at the intersection of the test organism and Escherichia coli ATCC 25922 within the zone of inhibition of the carbapenem susceptibility disc, as described by Anderson and recommended by CLSI [6] [7]. Klebsiella pneumoniae ATCC® BAA-1705 (MHT positive) and Klebsiella pneumoniae ATCC® BAA-1706 (MHT negative) were used as positive and negative controls for the MHT, respectively [6].
DNA Extraction
The Quick Extract™ bacterial DNA extraction kit was used to extract DNA from bacterial cells using a solid-phase enzymatic method. A loopful of the organism was harvested from a fresh colony of the bacterial species grown on MacConkey agar. DNA extraction was subsequently done according to the manufacturer's instructions [8]. Extracted DNA was then eluted from the columns in 100 μL of elution buffer and stored at −20˚C for further PCR analysis.
Primer Sequence
PCR analysis for beta-lactamase genes of the KPC, NDM and VIM families was carried out. Primers were obtained from Bioneer, Inc., USA. The resistance genes blaKPC, blaNDM and blaVIM were amplified by PCR using previously published primers [9]-[12].
[Table: target gene, primer sequences and amplicon size (bp) for blaKPC, blaNDM and blaVIM]
DNA Amplification
The prepared PCR mixtures (DNA template with master mix) were placed in the Eppendorf thermal cycler. Amplification was carried out according to standard thermal cycling conditions.
Data Analysis
Collected data were recorded on a computer and analysed using the Statistical Package for the Social Sciences version 16.0 (SPSS, Chicago, IL, USA). Results are presented, where appropriate, as tables, figures, diagrams and photographs.
Ethical Consideration
The study protocol was reviewed and approved by the ethical review committee of UMTH.
Results
The carbapenem susceptibility status of Enterobacteriaceae following preliminary screening with meropenem and ertapenem was as shown in Table 1.
All 225 isolates of Enterobacteriaceae were screened for carbapenem resistance using meropenem and ertapenem discs. Twenty-two (9.7%) of the isolates were resistant to meropenem, while 28 (12.4%) were resistant to ertapenem.
The 28 isolates of Enterobacteriaceae that tested positive on the screening test were subjected to confirmatory testing using the MHT, as shown in Table 2. A total of 23 (82.1%) of the 28 (100%) isolates tested positive by the MHT. Figure 1 shows the distribution of carbapenem resistance among species of Enterobacteriaceae following the Modified Hodge Test.
All the 23 isolates were further characterized for their molecular genotype by the use of PCR. Using DNA primers for the KPC, NDM-1 and VIM genes, a total of 11 (47.8%) KPC and 2 (8.8%) VIM genes were detected in 18 (78.3%) of the 23 (100%) isolates. In addition, 5 (21.7%) of the isolates had the NDM-1 and VIM genes coexisting together. No gene was detected in 5 (21.7%) of the isolates. Figure 2 shows the photograph of the PCR products of the carbapenemase genes. The KPC gene was detected at 785 bp, NDM-1 at 550 bp and VIM at 382 bp. The DNA ladder was set at 100 bp. The distribution of the carbapenemase genes among the various species of Enterobacteriaceae is shown in Figure 3.
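For illustration, the sketch below shows how gel band sizes map onto the expected amplicon sizes reported above and reproduces the genotype tally for the 23 confirmed isolates; the ±30 bp matching tolerance and the helper function are hypothetical conveniences, not part of the study protocol.

```python
# Expected amplicon sizes reported in the Results for the multiplex PCR readout.
AMPLICON_BP = {"KPC": 785, "NDM-1": 550, "VIM": 382}

def call_genes(band_sizes_bp, tolerance_bp=30):
    """Map observed gel band sizes to carbapenemase genes; the +/-30 bp tolerance
    is a hypothetical convenience for illustration, not part of the study protocol."""
    return {gene for gene, size in AMPLICON_BP.items()
            if any(abs(band - size) <= tolerance_bp for band in band_sizes_bp)}

print(call_genes([780, 545]))  # {'KPC', 'NDM-1'} -> an isolate carrying both genes

# Genotype counts reported for the 23 MHT-confirmed isolates.
counts = {"KPC only": 11, "VIM only": 2, "NDM-1 + VIM": 5, "no gene detected": 5}
total_confirmed = sum(counts.values())  # 23
print({k: f"{v} ({v / total_confirmed:.1%})" for k, v in counts.items()})
print(f"Overall prevalence among the 225 isolates: {total_confirmed / 225:.1%}")  # 10.2%
```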
Discussion
The occurrence of carbapenem resistance in the north-eastern part of Nigeria was established in this study. Following preliminary screening with meropenem and ertapenem, resistance was recorded in 22 (9.7%) and 28 (12.4%) isolates, respectively. This was similar to the prevalence reported by Yusuf et al. in Kano among isolates of Enterobacteriaceae, of 10.5% and 12.5% for imipenem and meropenem respectively, following phenotypic screening in surgical and intensive care units [13]. The same authors reported 7.4% resistance to imipenem and 87.5% resistance to meropenem among Enterobacteriaceae using the same study design at the same institution but in a different patient population [14].
This shows that, even within the same centre, a different sample population may yield a different resistance pattern, as was the case in this study. The epidemiologic significance of the finding of carbapenem resistance on preliminary screening is that it confirms the existence of carbapenem resistance in our locality. Hence, this calls for ongoing surveillance of this resistance threat in our healthcare setting. The confirmatory test (Modified Hodge Test) identified 23 (10.2%) isolates as carbapenemase producers out of the 28 screened carbapenem-resistant Enterobacteriaceae. This means that a prevalence of 10.2% was detected in this study. This was slightly lower than the prevalence of 14.0% recorded in Kano by Yusuf et al. among species of Enterobacteriaceae, with the highest prevalence found in Klebsiella pneumoniae (16.7%), followed by Proteus species (16.0%) and Escherichia coli (12.5%), while no carbapenemases were detected in Serratia species [14]. A much lower prevalence of carbapenemases, of 0.15%, was reported by Jones in Israel using the MHT [15].
The finding of a high prevalence of carbapenemases in Klebsiella pneumoniae in this study agrees with the finding of Landman [16], who reported that over one-third of Klebsiella pneumoniae collected in 2006 in New York, USA carried the carbapenemase enzyme. Several studies in the US have reported carbapenem-resistant Klebsiella pneumoniae as the species most commonly encountered there [17].
Although the study area, UMTH Maiduguri, Borno State, north-eastern Nigeria, does not have a regular prescription pattern for carbapenems, probably due to their high cost, a relatively high prevalence of CRE was nevertheless detected. This was not surprising, because numerous studies have shown that prior carbapenem therapy is not a prerequisite for carbapenem resistance among Enterobacteriaceae [18]. However, it is important to note that a substantial proportion of the patients who attend UMTH Maiduguri are referred from other hospitals in the region; these resistant pathogens were probably imported from other regions of Nigeria or from neighboring countries owing to the increase in local and international travel.
In this study, the occurrence and molecular features of the carbapenemase genes KPC, NDM-1 and VIM in the north-eastern part of Nigeria were established. The finding of NDM-1 in this study is worrisome, because NDM-1 has an ability to spread unlike any resistance mechanism previously seen in clinical microbiology [19]. The KPC gene was the predominant carbapenemase gene detected in this study. Although the KPC gene has been reported to be the predominant carbapenemase among Enterobacteriaceae worldwide [20], Spellberg and colleagues reported that NDM-1 was the most common in a survey of Enterobacteriaceae they carried out in 2009, whereas in 2010 they reported KPC as the most predominant in the UK [21].
The co-existence of the NDM-1 and VIM genes was noted in this study. The clinical significance of this finding is that patients harboring organisms possessing both genes are more likely to have multidrug resistance and a greater propensity for widespread nosocomial transmission. In the present study, 5 (21.7%) of the isolates had none of the genes detected. This implies that factors other than the presence of KPC, NDM-1 or VIM were responsible for resistance to the carbapenem group of antibiotics.
Conclusions
The present study highlights the existence of carbapenem resistance among Enterobacteriaceae in a center that does not routinely use carbapenem antibiotics. The carbapenemase-producing pathogens were simultaneously characterized by phenotypic and genotypic methods. In view of this emerging drug resistance, the practice of routine CRE testing along with a conventional antibiogram would be useful in all cases, which will help in the proper treatment of patients and also prevent further development of bacterial drug resistance. The role of infection control measures and antibiotic stewardship programs cannot be overemphasized.
Figure 1. The distribution of carbapenem resistance in species of Enterobacteriaceae following the Modified Hodge Test (N = 23).
Figure 3. The distribution of the carbapenemase genes among species of Enterobacteriaceae.
Table 1. The carbapenem susceptibility status of Enterobacteriaceae following preliminary screening with meropenem and ertapenem disc using the disc diffusion method.
|
v3-fos-license
|
2024-06-30T15:19:32.282Z
|
2024-06-27T00:00:00.000
|
270841987
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://doi.org/10.3390/ijerph21070839",
"pdf_hash": "6f677d5df103d6afb258fbbc974ccba53d5a7ebe",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46574",
"s2fieldsofstudy": [
"Psychology",
"Education"
],
"sha1": "1c217475e50e8f037c982b50f4d3462626a4b512",
"year": 2024
}
|
pes2o/s2orc
|
Bullying and Cyberbullying in School: Rapid Review on the Roles of Gratitude, Forgiveness, and Self-Regulation
This study aims to assist decision-making in anti-bullying interventions by highlighting the importance of positive factors such as gratitude, forgiveness, and self-regulation in mitigating the negative impacts of bullying/cyberbullying. The objective was to examine and synthesize available evidence on the impact of gratitude, forgiveness, and self-regulation practices in the school context regarding bullying/cyberbullying phenomena. Three databases were consulted (Web of Science, Scopus, and Scielo), and the results include 14 articles. The three character strengths were associated with psychological well-being, life and school satisfaction, improved mental health, increased likelihood of engaging in pro-social behavior, and reduced involvement in bullying/cyberbullying situations. These strengths have the potential to enhance overall well-being and decrease risk behaviors, leading to more positive outcomes in experiences of violence. These results underscore the importance of considering students’ individual strengths and the possible interventions to promote healthy school environments.
Introduction
Adolescence is a developmental phase characterized by various biological, behavioral, and psychosocial changes. It can be a tumultuous and confusing period in a person's life, where contradictory and inconsistent feelings may manifest as aggressive behaviors, such as school bullying or cyberbullying. Bullying involves the repeated and intentional expression of aggressive behaviors in peer relationships based on a power imbalance between victims and aggressors [1,2]. With increased access to the internet, the phenomenon has shifted to the online realm, and is now referred to in this space as cyberbullying [1]. It is observed that many cyberbullying victims are also victims of traditional bullying, indicating that cyberbullying creates additional victims [1].
According to the National School Health Survey (Pesquisa Nacional de Saúde do Escolar), 2019 edition, in Brazil, approximately 12% of students engage in bullying at school, and 23% are victims [3]. Regarding the prevalence of cyberbullying, around 13% of students feel threatened, offended, or humiliated on social media or on applications on their cell phones [3]. The consequences of bullying and cyberbullying can be categorized into three main areas: educational consequences, health consequences, and consequences that extend into adulthood [4].
Recent studies suggest that positive personality traits, or character strengths, may play a crucial role in mitigating the negative impacts of bullying and cyberbullying. These traits are thought to foster pro-social behavior and reduce involvement in such negative activities. However, research on the topic (bullying and cyberbullying) has heavily focused on diagnosing the phenomena and seeking associations with psychosocial variables. The association between non-cognitive and cognitive personality traits and bullying/cyberbullying has not been a research topic. In this regard, especially positive and desirable personality traits may be associated with higher levels of pro-social engagement or non-involvement in bullying/cyberbullying situations. It is in this direction that this review focuses on three character strengths to analyze the issues of bullying and cyberbullying.
Character strengths are positive personality traits that help individuals engage in morally valued behaviors [5]. They are considered "values in action", as each strength is related to the application of a specific virtue and reflects the psychological mechanisms that promote its practice. Everyday life is facilitated by the expression of character strengths through behaviors that impact people's social experiences [6]. There are a total of 24 character strengths grouped into 6 essential virtues: (1) creativity, curiosity, judgment, love of learning, and perspective (virtue of wisdom and knowledge); (2) bravery, perseverance, honesty, and enthusiasm (virtue of courage); (3) love, kindness, and social intelligence (virtue of humanity); (4) teamwork, justice, and leadership (virtue of justice); (5) forgiveness, humility, prudence, and self-regulation (virtue of temperance); (6) appreciation of beauty and excellence, gratitude, hope, humor, and spirituality (virtue of transcendence) [7].
Gratitude and forgiveness, specifically, are interpersonal strengths that promote well-being, happiness, and increased pro-social behavior by eliciting a combination of reflection, positive emotions, adaptive behaviors, and social relationships. These strengths share empathy as a common psychological component [6,8]. Individuals reporting higher levels of gratitude and forgiveness tend to report less anger and feelings of loneliness, as well as fewer depressive symptoms [5,6,9]. These individuals also report greater acceptance, empathy, and self-compassion. Self-regulation, in turn, is associated with healthier lifestyles and a reduction in adopting risky behaviors [5]. The selection of these three character strengths was based on their relevance in promoting positive behaviors and reducing involvement in bullying and cyberbullying.
While many studies have explored protective factors in bullying behavior, including forgiveness, gratitude, and self-regulation, there is still a need to synthesize and highlight these aspects comprehensively. We aim to examine and synthesize available evidence on gratitude, forgiveness, and self-regulation practices in the school context regarding bullying/cyberbullying phenomena. The hypothesis to be tested in this review considers that the presence and development of specific personality traits (gratitude, forgiveness, and self-regulation) may be associated with a decrease in involvement in bullying or cyberbullying situations. As this is an exploratory review, no specific roles of students (e.g., victims, aggressors, or bystanders) or the effects of these characteristics on one role or another are particularized. It is expected that a literature review will describe how these aspects are studied and the general understanding related to these factors.
Study Type
This is a rapid review, characterized by the application of an accelerated process in the search and synthesis of knowledge on a specific topic [10]. This type of review is no less systematic than other types, and is useful for gathering evidence to facilitate primary clinical decision-making. The Cochrane guidelines for conducting rapid reviews were followed [11]. In this review, the following steps were applied: definition of the research question; definition of eligibility criteria (including time limitations to be considered and the number of databases to be consulted); search, selection, and data extraction; and data analysis and the construction of the interpretative synthesis [10,11].
Guiding Question
To construct the guiding question for the review, the PCC strategy (population, concept, and context) was used [12]: How do character strengths such as gratitude, forgiveness, and self-regulation in adolescents interact with or influence the dynamics of bullying/cyberbullying in school settings?
Search Strategies
The search was conducted in three databases: Web of Science, Scopus, and Scielo. Search terms and cross-references were developed to capture publications that addressed the proposed objective: gratitude AND bullying; forgiveness AND bullying; self-regulation AND bullying. The terms were also used in Portuguese on Scielo. The strategies employed in each database are described in Table 1. The results were exported to the Rayyan platform [13]. Initially, each abstract and title were evaluated by one researcher, who selected items to read in full. Full texts were also assessed by a researcher. Another researcher independently supervised and guided the corpus selection process. Data extraction was conducted by the two researchers, who regularly met to address concerns and ensure that data extraction was consistently carried out following the review's objective and guiding question, as well as the application of inclusion and exclusion criteria.
Inclusion and Exclusion Criteria
The review covered the last five years (2019-2023). This time limit was defined to reach the most current scientific literature and to follow the guidelines for conducting this type of review. Articles reporting empirical research were eligible if they focused on children and adolescents, addressing issues related to bullying or cyberbullying in schools. Only texts published in English, Spanish, or Portuguese were considered. Articles were excluded if they involved other populations (children, young adults, or adults, for example); focused solely on broad contextual factors related to the investigated phenomena; were opinion articles, protocols, or reviews; or concentrated on the conception or evaluation of instruments or interventions.
Data Analysis
The descriptive analysis involved systematically summarizing the key findings from each study. This included quantifying the frequency of certain variables and outcomes, categorizing the types of bullying and cyberbullying behaviors observed, and noting the prevalence of positive character strengths. The exploratory analysis aimed to identify and examine underlying patterns and relationships within the data that might not have been immediately evident through descriptive methods alone. This involved a more in-depth examination of the interactions between different variables, such as how gratitude, forgiveness, and self-regulation correlated with bullying and cyberbullying behaviors.
Following the descriptive and exploratory analyses, an interpretative synthesis was created.This synthesis involved integrating the findings from the individual studies into a coherent narrative that emphasized the most important conclusions.The researchers critically evaluated the evidence and discussed its implications for understanding the role of positive character strengths in addressing bullying and cyberbullying.
Constitution and Characteristics of the Reviewed Corpus
The initial screening of titles and abstracts was conducted on the Rayyan platform, excluding duplicates (n = 49) and studies with results not pertinent to the review or involving other populations (n = 73) in the first instance. Thirty-three articles were selected in the final corpus selection to be read in full. The selection process is detailed in the PRISMA flowchart available in Figure 1.
Five studies were conducted in Spain, two in China, two in Peru, and two in Mexico. Australia, Italy, and Turkey each contributed one study, and no research conducted in Brazil was found. Among the included studies, 13 were cross-sectional and 1 was longitudinal [14]. The smallest recorded sample consisted of 43 adolescents, while the largest came from a study with 2,758 participants. Ages varied between 9 and 19 years across the studies. A variety of instruments were used to collect data, mainly to assess bullying or cyberbullying situations. Table 2 presents the descriptive data of the reviewed corpus. The interaction between peer victimization and forgiveness on self-esteem was significant.
For adolescents with low forgiveness (reactive), victimization had a significantly negative impact on self-esteem. For adolescents with high forgiveness (protective), the negative effect of victimization on self-esteem was even stronger. Regardless of forgiveness levels, peer victimization had a similar impact on subjective well-being. It was observed that gratitude was a variable present in various studies, often associated with other variables such as cyberbullying, mindfulness, life satisfaction, compassion, moral development, and pro-social behavior. Forgiveness was also a common variable, frequently studied in relation to cyberbullying, well-being, self-control, revenge motivations, self-esteem, stress, and bullying. Among the variables of interest in the review, self-regulation, approached from the perspective of positive psychology, was the least identified. This initial analysis demonstrates that there is a diverse range of variables that can interact with concepts important for understanding the dynamics of bullying/cyberbullying, but further studies on the relationship with self-regulation are still needed.
Primary Outcomes Reported in the Studies
Considerable rates of victimization and perpetration of bullying or cyberbullying were revealed. One of the included studies, for example, identified a rate of involvement in bullying situations of approximately 35% [22]. Another study identified cybervictimization rates of 23% and cyberbullying perpetration rates of approximately 18% in a final sample composed of 979 Spanish adolescents [23]. The results of the reviewed studies explore a wide range of factors related to bullying and cyberbullying, as well as the relationships established between these phenomena and gratitude, forgiveness, and self-regulation. It was found that, in many studies, character strengths act as mediators for other issues, such as mental health outcomes or the display of pro-victim behaviors.
A separation between bullying and cyberbullying data is necessary to better understand the specifics of the included studies. In this regard, six studies presented data on bullying [18-22,24]. Students who observe bullying situations in schools and exhibit feelings of gratitude, forgiveness, compassion, or happiness, for example, tend to display more pro-social behaviors [18,22]. One study also found that adolescents who have high levels of forgiveness, gratitude, and self-control are less likely to engage in bullying situations [19]. This occurs because these characteristics help manage negative emotions and promote more positive and constructive responses to conflicts and challenges. On the other hand, students with high levels of victimization showed a greater desire for revenge, greater motivation for school avoidance, feelings of loneliness, and a more negative evaluation of their support networks [20]. Students victimized in schools also tended to have fewer feelings of gratitude [24]. Eight studies presented data on cyberbullying [14-17,23,25-27]. In these cases, students who reported being able to forgive, or who had effective strategies for dealing with cyberbullying, showed higher well-being despite the victimization they suffered [17]. Cybervictimization was also significantly associated with stress and the motivation for revenge, as well as with a lower willingness to exercise forgiveness [23,25]. Other results can be checked in detail, study by study, in Table 2.
Regarding gratitude, in general, it was demonstrated to play a significant role in promoting emotional well-being and combating the negative effects of bullying/cyberbullying. Additionally, gratitude correlated positively with pro-social behavior towards victims and acted as a mediator in the relationship between dimensions of emotional intelligence and cyberaggression, partially or fully explaining how these dimensions affect aggressive behavior. In this regard, adolescents who were victims of bullying, especially cyberbullying, were more likely to exhibit symptoms of depression. However, feeling gratitude was associated with fewer depressive symptoms among victims, especially girls. On the other hand, adolescents more sensitive to acts of kindness tended to be less involved in bullying behaviors.
It was evidenced that forgiveness was related to reduced levels of depression, lower involvement in revenge behaviors, and a lower likelihood of participation in bullying or cyberbullying situations. Furthermore, it was identified that the willingness to forgive positively influenced the emotional well-being and self-esteem of adolescents, even in cases of victimization. Forgiveness also had a significant impact on the experiences of bullying victims, influencing the desire for revenge, emotional loneliness, and perceptions of social networks.
Regarding self-regulation, it was observed that students who rarely expressed guilt and sympathy in cyberbullying events demonstrated moderate self-regulation. Moreover, being female was positively related to self-regulation, while the negative correlation between self-regulation and aggressive defensive behavior suggests that the better someone is at self-regulating their emotions and actions, the less likely they are to adopt an aggressive intervention when witnessing cyberbullying.
Another necessary separation in the data analysis concerns the role of students (victims, aggressors, or bystanders) in bullying or cyberbullying situations. In this regard, victims who reported being able to forgive, or who had effective strategies for dealing with cyberbullying, showed greater well-being despite the victimization suffered [17]. For adolescents with a low ability to express forgiveness, victimization had a significantly negative impact on subjective well-being and mental health [20,21,23]. In Italy, cybervictimized boys were less willing to forgive [25]. Victimized students in Spanish schools also tended to have fewer feelings of gratitude [24]. Regarding aggressive behaviors, studies reported that students with higher levels of forgiveness, gratitude, and self-regulation were less likely to engage in bullying or cyberbullying situations [16,19,23,25]. For bystanders, it was also found that pro-social behaviors towards victims were associated with the ability to express feelings of gratitude, forgiveness, or compassion [18,22].
Some key results on gender differences should also be stated explicitly, even though this was not one of the purposes of this review. Overall, girls in the various studies reported possessing more of these positive aspects than boys (gratitude, forgiveness, and self-control, for example). For instance, one study found that, in general, girls tend to report higher levels of gratitude, forgiveness, happiness, and pro-social behavior when witnessing bullying situations in schools [18]. Generally, boys were more aggressive and reported less gratitude than girls [19]. Boys who were victims of cyberbullying also showed less willingness to forgive [25]. Additionally, boys who were able to forgive considered aggressive events less serious over time [27]. Additional studies are recommended to better understand this dynamic from a gender perspective.
Additionally, in the analysis of these studies, it was verified how the studies ensured the validity and accuracy of their results. Since the studies were mostly cross-sectional, the validity and accuracy of the results depend on various statistical practices and techniques. The researchers adopted appropriate statistical practices to assess the validity and accuracy of the results. They used a variety of statistical methods, including statistical tests, correlation analyses, and modeling, while controlling for confounding variables and adopting a specific statistical significance level to assess the significance of the results (information contained in the Supplementary Materials). These practices are fundamental for conducting valid and reliable research. Table 2 describes the main results revealed in each study.
Interpretative Synthesis
This review presents a solid foundation on students' experiences of bullying and cyberbullying. Additionally, the empirical data provide a relevant framework for understanding how values in action, or character strengths in the individual field, can change the outcomes of these experiences, primarily in terms of mental health. This review therefore adds information about phenomena that affect adolescent development and can be used in clinical interventions or within school contexts. It is also noted that the reviewed data come from local realities, but with social and experiential foundations that can be considered globally.
Specifically, regarding the guiding question of this study, it was noticed that gratitude has been explored as a potential factor to improve the quality of life and psychological state of students involved in bullying situations, especially those who are victimized either traditionally at school or online. Promoting the feeling of gratitude was also considered a valuable approach for preventing violence among adolescents. Regarding forgiveness, the studies revealed a lack of initiatives to help students use this feeling as a coping mechanism for bullying/cyberbullying or to improve well-being. Forgiveness is described in many studies mainly as a "buffer" against the negative impacts of victimization. The absence of self-regulation, in turn, can make aggressors more vulnerable to mental health problems. Similarly, this absence inhibits pro-victim or defensive behaviors when other students witness the aggressions.
Furthermore, the results show that the potential of the three character strengths in relation to bullying and cyberbullying is not directly associated with the display of violent behaviors. The associations are with improved well-being, self-esteem, increased life or school satisfaction, and the mitigation of mental health problems. From a comprehensive perspective, gratitude, forgiveness, and self-regulation can favor the adoption of pro-social behavior, which, indirectly, can reduce the occurrence of peer violence. The positive trend revealed indicates that as students become more willing to demonstrate gratitude, forgiveness, and self-regulation, there is less propensity to engage in bullying/cyberbullying behaviors. At the same time, these values in action can also reduce the negative impact of victimization.
These findings may have significant implications for interventions aimed at promoting positive attitudes and behaviors among adolescents. Promoting gratitude, forgiveness, and self-regulation could be an effective strategy to create healthier environments and, consequently, reduce relationship problems or harmful behaviors in adolescence. In this sense, according to the reviewed studies, anti-bullying interventions should consider the lived experiences of students and implement activities that favor the reconnection of students with their strengths.
Discussion
These findings underscore the importance of promoting gratitude as a potentially effective tool in preventing and mitigating the negative impacts of bullying and cyberbullying on adolescents' lives. In summary, the results highlight the role of forgiveness as a key factor in promoting emotional well-being and coping with the challenges associated with bullying and cyberbullying in adolescence. Adolescents who reported more forgiveness, gratitude, and self-control were less likely to engage in aggression, and this should be considered in intervention programs.
Consistent with the findings of this review, research on other themes has shown that presenting character strengths is a protective factor that ensures greater well-being and happiness and reduces depressive symptoms [28,29]. Developing character strengths also facilitates overcoming traumatic events and maintaining positive emotions or moods [9,29,30]. In the case of bullying victims, according to the reviewed studies, expressing feelings of gratitude and forgiveness can decrease the negative impacts of victimization experiences. Improvements in mental health indices and a decrease in depressive symptoms were among the most significant findings.
Gratitude has a beneficial effect, especially when evaluating subjective issues [28]. A study with two independent samples of Israeli adolescents (505 participants in total) also revealed that expressing feelings of gratitude facilitated pro-social behaviors and increased peer acceptance [6]. A systematic review that included 74 randomized clinical trials demonstrated that individuals subjected to gratitude-focused interventions improved their mental health and had fewer symptoms of anxiety and depression [29]. It is suggested that gratitude can play a protective role against risk behaviors and health problems.
Undoubtedly, the most intriguing findings of this review involve the approach to forgiveness. While not advocating that victims should forgive their tormentors, the exercise of forgiveness seems to represent a less ambiguous behavioral response than gratitude and is more clearly related to other psychological phenomena. Furthermore, forgiveness may require more time to develop, as the person regulates their aversive emotions (e.g., anger) and intentions of revenge or retaliation [8].
A remarkably similar pattern of findings suggested that forgiveness is related to better interpersonal relationships and more social engagement based on social values desirable for maintaining social order [9]. Additionally, individuals with a greater willingness to forgive others tend to be physically healthier [5,9]. Forgiveness is also essential for developing a sense of trust in relationships. For bullying/cyberbullying victims this can be crucial, as victims tend to establish patterns of mistrust in relationships with others, an aspect that extends to other life stages.
On the other hand, self-regulation tends to prevent mental health problems [28]. Because self-regulation manifests more objectively in behavior, involving the ability to control impulses, regulate emotions, and maintain healthy behaviors, students with good self-regulation can manage the stress experienced at school when involved in bullying/cyberbullying situations and can seek help when necessary.
It is also important to mention that studies on character strengths indicate that, even when experiencing adversities in childhood or adolescence (such as bullying or cyberbullying), possessing characteristics such as gratitude and self-regulation favors the maintenance of mental health [5,30]. Character strengths can be considered personal resources that contribute to well-being and quality of life, highlighting the importance of addressing not only external factors in anti-bullying interventions but also strengthening the positive personality traits of students.
The reviewed data also have practical implications. To effectively implement programs that nurture gratitude, forgiveness, and self-regulation among students, schools can adopt a variety of structured activities. For example, gratitude journals can be introduced in which students regularly write about things they are thankful for, fostering a positive mindset and enhancing emotional well-being. Schools can also organize workshops and role-playing activities that teach students the importance of forgiveness, helping them to understand and practice resolving conflicts peacefully. Furthermore, self-regulation skills can be developed through mindfulness sessions and stress management techniques, enabling students to better control their impulses and emotions. Storytelling sessions in which students discuss characters' actions in books or historical events can also highlight the importance of these virtues. These strategies not only promote a healthier school environment but also equip students with essential life skills, contributing to their overall development and reducing incidences of bullying and cyberbullying.
Finally, although this review has many strengths, its results should be interpreted considering its three main limitations. Firstly, the reviewed studies are limited to seven countries, and conclusions about the effects of gratitude, forgiveness, and self-regulation in bullying and cyberbullying situations cannot be generalized to different countries and cultures. Secondly, the psychological well-being results measured in the included studies were not the same, since the measurement tools used were not identical. Thirdly, this review focuses on personal character strengths, and other variables that contribute to the occurrence of bullying/cyberbullying, such as social and contextual factors, were not considered. In addition, although the review covered studies from different countries, intercultural differences were not considered in the data analysis; this could be the subject of further investigation. Despite these limitations, this work broadens the perspective on phenomena that affect the health and development of students, contributing to the current body of knowledge.
Conclusions
In summary, the conclusions of this study provide some evidence about the potential of character strengths to promote individual well-being, especially when analyzing painful experiences of bullying or cyberbullying in adolescence. Other researchers are encouraged to build on the findings of this review in the context of schools in different countries and cultures. Empirical research evaluating the presented findings could be useful for psychologists and may capture culturally sensitive aspects that were not identified in the reviewed studies.
Figure 1 .
Figure 1. Diagram of the search process and selection of articles for the review (PRISMA).
Table 1 .
Search strategies applied in databases.
Table 2 .
Identification and characteristics of the reviewed corpus.
|
v3-fos-license
|
2019-04-15T13:12:25.776Z
|
2017-06-08T00:00:00.000
|
114338086
|
{
"extfieldsofstudy": [
"Computer Science"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "http://eejournal.ktu.lt/index.php/elt/article/download/18334/8797",
"pdf_hash": "1af4a80e6d8b5321b5eedb365615facea76ea094",
"pdf_src": "Anansi",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46575",
"s2fieldsofstudy": [
"Computer Science"
],
"sha1": "1af4a80e6d8b5321b5eedb365615facea76ea094",
"year": 2017
}
|
pes2o/s2orc
|
Impact of Complexity and Compression Ratio of Compression Method on Lifetime of Vision Sensor Node
The energy budget is limited in remote applications of Wireless Vision Sensor Networks (WVSN). It imposes strict constraints on both the processing energy consumption and the transmission energy consumption of the Vision Sensor Nodes (VSN). Transmitting raw images to the Central Base Station (CBS) consumes a substantial portion of the energy budget of each VSN. The consequence of the greater transmission energy consumption due to transmitting raw images is a reduced lifetime of the VSN. Image compression standards effectively reduce the transmission energy consumption by compressing the images, but the computational complexity of a compression method also has a significant effect on the energy budget and lifetime of each VSN. This paper investigates the impact of the computational complexity and communication energy consumption of three chosen compression methods on the lifetime of the VSN. Both statistically generated images and real captured images are used for evaluating the energy consumption of the three chosen image compression methods. We have determined the improvement in the lifetime of the VSN based on the computational complexity and compression ratio of the three selected binary image coding methods. DOI: http://dx.doi.org/10.5755/j01.eie.23.3.18334
I. INTRODUCTION
The inaccessible applications of the Wireless Vision Sensor Network (WVSN) impose strict constraints on both the execution and transmission energy consumption of the Vision Sensor Nodes (VSNs). The hardware components of a typical VSN include a camera, an onboard processing unit, memory and a radio transceiver. The energy budget of WVSNs is limited because of their deployment in inaccessible zones, where it is difficult to change the position of the node or to regularly replace the batteries.
One application of WVSN is sky surveillance for the detection of birds/bats which fly towards wind mills/turbines [1], [2]. Other applications include automatic meter reading [3]-[5] and target detection/tracking [6]-[8]. Our intended application is the automatic monitoring of hydraulic systems for failure detection [9]. We can predict the health of the hydraulic system based on the number and dimensions of the magnetic particles in the images captured and processed by the VSNs.
This diverse set of applications demands a significantly large number of VSNs for continuous and persistent monitoring. Cabling the sensor network for powering the VSNs and for communication with the server is difficult and costly for such remote applications. Hence, for remote applications of WVSN, the deployment of battery-operated VSNs is essential.
The image processing flow from image capturing up to feature extraction includes many complex algorithms, including filtering, background-frame subtraction, segmentation, morphology, labelling, object dimension/feature extraction and image compression. The VSNs must be able to perform these complex tasks using the onboard processing unit and need to be able to communicate (wirelessly) the final results. However, they have an inadequate energy budget in the form of the batteries installed at deployment time. A limited energy resource in the form of batteries puts hard restrictions on the kind of hardware components used and on the algorithms for the various image-processing tasks. Typically, the preference is for hardware components with low power consumption and algorithms with low computational complexity. The energy budget and wireless communication are the major constraints of remote applications of WVSN.
Both in-node processing and (wireless) transmission to the server consume a large share of the energy budget of the VSN. Communicating the images from the node without in-node processing decreases the processing energy, but its consequence is greater transmission energy because of the large amount of information contained in the images.
On the contrary, performing the entire processing using the on-board processing unit and communicating only the end results does reduce the transmission energy consumption. But its drawback is greater execution energy consumption due to the longer processing time at the VSN. Figure 1 shows these two extremes of processing at the VSN. We have previously concluded in [10] and [11] that the choice of an appropriate strategy for Intelligence Partitioning (IP) between the server and the VSN reduces the overall energy consumption of the node.
However, transmitting the uncompressed images to the server will quickly drain the total energy of the node. Transmission energy consumption is largely dependent on the data that is being transmitted between the VSN and the server. Compressing the bi-level image after pre-processing and segmentation proves to be a good alternative for achieving a general architecture for some applications of WVSN [12]. The general architecture from [12] is presented in Fig. 2, which shows that the remainder of the operations, such as bi-level image processing operations, labelling and object feature extraction, are shifted to the server.
The size of the compressed image in Fig. 2 depends on the compression method used. Additionally, the VSN's energy consumption depends on the processing complexity of the underlying compression algorithm. We determined in [13] that JBIG2 [14], CCITT Group 4 [15] and Gzip_Pack [16] are appropriate binary image compression standards for inaccessible applications of WVSN.
In the current work, we are interested in determining the improvement in the lifetime of the VSN based on the reduction in communication energy consumption which can be achieved by using any of these three suitable image compression methods. Our analysis is based on the NGW100 mkII, which is an AVR32-based architecture. The NGW100 mkII kit uses the AT32AP7000, which has a 32-bit digital signal processor. The kit has 256 MB Random Access Memory (RAM) and 256 MB NAND flash. The AT32AP7000 operates at a 150 MHz clock.
The rest of the article is organized as follows. Section II describes the related work and Section III presents the experimental setup. Section IV elaborates the execution time, the energy consumption and the improvement in the lifetime of the VSN based on the three compression methods. Finally, the conclusion is provided in Section V.
II. RELATED WORK
Representative examples of WVSNs are explained in [17]-[19]. The authors in [17] developed a mote for a camera-based wireless sensor network. They analysed the processing and memory limitations in current mote designs and developed a simple but powerful platform. Their mote is based on a 32-bit ARM7 micro-controller operating at 48 MHz with up to 64 KB of on-chip RAM. The IEEE 802.15.4 standard has been used for wireless communication.
The authors in [18] presented the CMUcam3, which is a cheap, open-source, embedded computer vision platform. Their hardware platform is composed of a frame buffer, a colour CMOS camera, a cheap 32-bit ARM7TDMI microcontroller and a memory card.
The authors in [19] proposed and demonstrated a wireless camera network system which they named CITRIC. Their hardware platform consists of a camera, a CPU (which supports up to a 624 MHz clock speed), 64 MB RAM and 16 MB flash. Their hardware is capable of performing in-network processing of images in order to reduce the transmission energy consumption.
III. EXPERIMENTAL SETUP
Our intended application is magnetic particle detection in a flowing liquid in a hydraulic system. The prototype hydraulic system and the proposed flow of the image processing tasks are shown in Fig. 3. The particles are categorized by their dimensions and number, and our system is intended to be used for the detection of failures in hydraulic systems.
Image capture: The image of the magnetic particles from the round glass in Fig. 3 is captured in this step.
Pre-Processing: The current frame is subtracted from the stored background. A predefined threshold is used for segmenting the result into a black and white image. In this thresholded image, the magnetic particles are the white objects while the background is black. Image Compression: Any of the three selected binary image coding methods can be used for the image compression. The dotted lines in Fig. 3 show that any of the three binary image coding standards can be used. The goal is to analyse the impact of the three image compression methods on the energy consumption and, eventually, the lifetime of the VSN. The compressed images are transmitted to the server for performing the rest of the image processing tasks.
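To make the pre-processing and segmentation step concrete, the following is a minimal sketch in Python/NumPy, assuming 8-bit grayscale frames and a hypothetical fixed threshold value; the actual VSN implementation runs as compiled code on the AVR32 platform and may differ in detail.

```python
# Minimal sketch of background subtraction and thresholding, assuming 8-bit
# grayscale frames stored as NumPy arrays; the threshold value is hypothetical.
import numpy as np

def segment_particles(frame: np.ndarray, background: np.ndarray,
                      threshold: int = 40) -> np.ndarray:
    """Return a bi-level image: magnetic particles white (1), background black (0)."""
    # Background-frame subtraction; the absolute difference avoids sign problems.
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    # Apply the predefined threshold to obtain the segmented binary image.
    return (diff > threshold).astype(np.uint8)
```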
IV. THE PROCESSING TIME AND ENERGY CONSUMPTION
This section consists of the discussions related to the energy consumption and execution time of the three bi-level image coding methods. The executable files and respective libraries of all three compression standards were downloaded to the target embedded platform (NGW100 mkII) and used to compress the images. The mean compressed file sizes for the three binary image coding standards selected from the seven methods analysed in [13] are shown in Table I.
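As an illustration of how such compressed file sizes can be compared, the sketch below compresses a segmented bi-level image with Gzip and with CCITT Group 4 (via a bi-level TIFF). It assumes a Pillow build with libtiff support and synthetic input, and is only indicative: the paper's measurements were obtained with the methods' own executables on the NGW100 mkII, and no standard Python JBIG2 encoder is assumed here.

```python
# Indicative comparison of compressed sizes for a bi-level image.
# Assumes Pillow with libtiff support; image content and file name are examples.
import gzip, os
import numpy as np
from PIL import Image

binary = np.zeros((480, 640), dtype=np.uint8)   # segmented image (0/1)
binary[200:220, 300:340] = 1                     # a synthetic "particle"

# Gzip over the packed bit stream.
gz_size = len(gzip.compress(np.packbits(binary).tobytes()))

# CCITT Group 4 via a bi-level TIFF.
Image.fromarray(binary * 255).convert("1").save("seg_g4.tif", compression="group4")
g4_size = os.path.getsize("seg_g4.tif")

print(f"Gzip: {gz_size} bytes, CCITT Group 4: {g4_size} bytes")
```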
The compressed file size is different for the various compression methods, and the communication time of the radio transceiver is dependent on the size of the transmitted data. In our previous work [10]-[12], the data transmission time for the IEEE 802.15.4 radio transceiver was determined using (1):

T_802.15.4 = (X + 19) × 0.000032 + 0.000192, (1)

where X is the number of transmitted bytes.
In our current work, IEEE 802.15.4 is considered for the transmission of data from the VSN; thus, the same equation is used to calculate the transmission time in Table I. The constants in (1) are based on the CC2520 wireless transceiver. The packet structure is composed of 133 octets, where 127 octets are for the frame length and 6 octets are the PHY header. The 0.000032 in (1) is the transmission time of one byte, while 0.000192 is the minimum Inter-Frame Separation (IFS) period. The size of the compressed image is larger than the maximum packet size of the wireless transceiver, so the compressed bit stream of each compression method is transmitted in several packets.
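A small calculation sketch may clarify how equation (1) and the packetization described above translate a compressed image size into transmission time and energy. Applying the form of (1) per 127-byte frame and the transmit power value are assumptions made here for illustration; the paper's own figures come from the power characteristics of the CC2520 transceiver.

```python
# Sketch: transmission time/energy for a compressed image over IEEE 802.15.4.
# Per-packet use of eq. (1) and the transmit power value are assumptions.
BYTE_TIME_S = 0.000032   # time for one byte, from eq. (1)
IFS_S = 0.000192         # minimum inter-frame separation, from eq. (1)
MAX_PAYLOAD = 127        # maximum frame length in octets
TX_POWER_W = 0.1         # hypothetical transmit power, for illustration only

def transmission_time(total_bytes: int) -> float:
    """Air time when the compressed bit stream is split into several packets."""
    time_s = 0.0
    for start in range(0, total_bytes, MAX_PAYLOAD):
        payload = min(MAX_PAYLOAD, total_bytes - start)
        time_s += (payload + 19) * BYTE_TIME_S + IFS_S  # form of eq. (1)
    return time_s

def transmission_energy(total_bytes: int) -> float:
    return transmission_time(total_bytes) * TX_POWER_W

# Example with a hypothetical 6 kB compressed image.
print(transmission_time(6000), transmission_energy(6000))
```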
Table I shows that the transmission energy consumption is highest for transmitting segmented images and lowest for JBIG2 compression. Although the transmission energy of JBIG2 is the lowest, its compression time is the highest due to the high computational complexity of the underlying algorithm, i.e. arithmetic coding. The compression time and the total energy consumption of the compression methods are shown in Table II. The total processing time in the third column of Table II includes the compression time as well as the time for capturing, pre-processing and segmentation. Contrary to Table I, the total energy consumption of JBIG2 is high compared to Group 4 and Gzip_Pack, and the reason for this is the long compression time. The high total energy consumption of JBIG2 results in a reduced lifetime of the VSN.

WVSNs are subject to data losses, where a packet and sometimes the whole image could be lost. The acknowledgment of every packet must be received and, in case of failure, the packets should be retransmitted; otherwise the image cannot be decompressed at the receiver side. As our goal in the current paper is to explore the compression performance and energy consumption of the compression methods, which will help us decide which compression method is the most suitable for WVSNs, we have not tested the compression methods in an actual physical deployment, and the discussion about actual losses and their recovery is therefore out of the scope of the current work.

The lifetime of the VSN in Fig. 4 is calculated using 4 AA batteries. Figure 4 shows that the lifetime of the VSN is lowest for the case when the segmented binary image is transmitted to the server. On the other hand, the lifetime of the VSN is highest for the case when the segmented image is compressed using the CCITT Group 4 compression method. For long sampling intervals in Fig. 4, the lifetime of the VSN does not increase further because the sleep energy becomes dominant. It is true that there may be fluctuations in the current during the compression process, but we have determined the average value and the variation in the current is very small. Hence, the average value of the current is a good measure. The voltage will also vary with the passage of time, but as long as the node remains functional, the difference in the voltage will be very small, and hence our measured energy consumption provides a good analysis. The transmission energy is calculated using the transmission time and the power characteristics of the IEEE 802.15.4 standard.
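The lifetime trend described above can be reproduced with a rough energy-budget model; the battery capacity, per-cycle energy and sleep power used below are hypothetical illustration values, not the measured figures behind Fig. 4.

```python
# Rough lifetime model for the VSN: battery budget divided by the energy
# drawn per sampling interval (one active cycle plus sleep). All numbers
# below are hypothetical illustration values, not the paper's measurements.
AA_CELL_WH = 1.5 * 2.5                  # ~1.5 V x 2.5 Ah per AA cell (assumed)
BATTERY_J = 4 * AA_CELL_WH * 3600.0     # 4 AA batteries, in joules

def lifetime_years(cycle_energy_j: float, sleep_power_w: float,
                   sampling_interval_s: float) -> float:
    """Lifetime when one capture/compress/transmit cycle runs per interval."""
    per_interval_j = cycle_energy_j + sleep_power_w * sampling_interval_s
    intervals = BATTERY_J / per_interval_j
    return intervals * sampling_interval_s / (3600 * 24 * 365)

# At long sampling intervals the sleep term dominates, so the lifetime
# flattens out, matching the behaviour described for Fig. 4.
for interval in (60, 300, 3600):
    print(interval, round(lifetime_years(0.5, 0.0003, interval), 2))
```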
V. CONCLUSIONS
We have explored the influence of the processing time, compression efficiency and energy consumption of the three binary image compression methods on the lifetime of the VSN. Although the compression efficiency of JBIG2 is the highest, its long compression time makes its total energy consumption high. The higher total energy consumption of JBIG2 results in a reduced lifetime of the VSN. The lifetime of the VSN is comparable for the Gzip_Pack and CCITT Group 4 compression methods. A reduced lifetime of the VSN is not desirable for most applications of Wireless Vision Sensor Networks (WVSNs). Hence, CCITT Group 4 and Gzip_Pack are suitable candidates for the energy-constrained applications of WVSNs.
Fig. 2 .
Fig. 2. Proposed architecture for the implementation of the VSN.
Fig. 3 .
Fig. 3. The imaging flow for particle detection in hydraulic system.
Fig. 1. The two extremes of image processing tasks in WVSN.
TABLE I .
TRANSMISSION ENERGY CONSUMPTION.
TABLE II .
TOTAL PROCESSING TIME AND ENERGY CONSUMPTION.
TABLE III .
TOTAL TIME AND ENERGY CONSUMPTION (PROCESSING + TRANSMISSION).
Table III shows the total time and total energy of the various compression methods. In Table III, the total time in the second column is the sum of the processing time and transmission time from Table II and Table I, respectively. Similarly, the total energy in the third column of Table III is the sum of the processing energy and transmission energy consumption from Table II and Table I, respectively.
|